Search Results: "bernat"

9 February 2017

Steve Kemp: Old packages are interesting.

Recently Vincent Bernat wrote about writing his own simple terminal, using vte. That was a fun read, as the sample code built really easily and was functional. At the end of his post he said:
evilvte is quite customizable and can be lightweight. Consider it as a first alternative. Honestly, I don't remember why I didn't pick it.
That set me off looking at evilvte, and it is one of those rare projects which seem to be pretty stable, and also hasn't changed in any recent release of Debian GNU/Linux: I wonder if it would be possible to easily generate a list of packages which have the same revision in multiple distributions? Anyway I had a look at the source, and unfortunately spotted that it didn't handle clicking on hyperlinks terribly well. Clicking on a link would pretty much run:
 firefox '%s'
That meant there was an obvious security problem. It is a great terminal though, and it just goes to show how short, simple, and readable such things can be. I enjoyed looking at the source, and furthermore enjoyed using it. Unfortunately due to a dependency issue it looks like this package will be removed from stretch.
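To make the bug class concrete, here is a hedged C sketch (my illustration, not evilvte's actual code): formatting an attacker-controlled URL into a shell command line lets a crafted link inject commands, while passing the URL as a separate argv element avoids the shell entirely.

#include <glib.h>

/* Hypothetical illustration of the bug class, not evilvte's actual code. */
static void open_url_unsafe(const gchar *url)
{
    /* A link such as http://x/';rm -rf ~;' escapes the single quotes. */
    gchar *cmd = g_strdup_printf("firefox '%s'", url);
    g_spawn_command_line_async(cmd, NULL);
    g_free(cmd);
}

static void open_url_safer(const gchar *url)
{
    /* No shell involved: the URL is a single argv element. */
    gchar *argv[] = { "firefox", (gchar *)url, NULL };
    g_spawn_async(NULL, argv, NULL, G_SPAWN_SEARCH_PATH,
                  NULL, NULL, NULL, NULL);
}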

7 February 2017

Vincent Bernat: Write your own terminal emulator

I was a happy user of rxvt-unicode until I got a laptop with a HiDPI display. Switching from a LoDPI to a HiDPI screen and back was a pain: I had to manually adjust the font size on all terminals or restart them. VTE is a library to build a terminal emulator using the GTK+ toolkit, which handles DPI changes. It is used by many terminal emulators, like GNOME Terminal, evilvte, sakura, termit and ROXTerm. The library is quite straightforward and writing a terminal doesn't take much time if you don't need many features. Let's see how to write a simple one.

A simple terminal

Let's start small with a terminal with the default settings. We'll write that in C. Another supported option is Vala.
#include <vte/vte.h>

int
main(int argc, char *argv[])
{
    GtkWidget *window, *terminal;

    /* Initialise GTK, the window and the terminal */
    gtk_init(&argc, &argv);
    terminal = vte_terminal_new();
    window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_window_set_title(GTK_WINDOW(window), "myterm");

    /* Start a new shell */
    gchar **envp = g_get_environ();
    gchar **command = (gchar *[]){ g_strdup(g_environ_getenv(envp, "SHELL")), NULL };
    g_strfreev(envp);
    vte_terminal_spawn_sync(VTE_TERMINAL(terminal),
        VTE_PTY_DEFAULT,
        NULL,       /* working directory  */
        command,    /* command */
        NULL,       /* environment */
        0,          /* spawn flags */
        NULL, NULL, /* child setup */
        NULL,       /* child pid */
        NULL, NULL);

    /* Connect some signals */
    g_signal_connect(window, "delete-event", gtk_main_quit, NULL);
    g_signal_connect(terminal, "child-exited", gtk_main_quit, NULL);

    /* Put widgets together and run the main loop */
    gtk_container_add(GTK_CONTAINER(window), terminal);
    gtk_widget_show_all(window);
    gtk_main();
}
You can compile it with the following command:
gcc -O2 -Wall $(pkg-config --cflags --libs vte-2.91) term.c -o term
And run it with ./term. (Screenshot: a simple VTE-based terminal.)

More features

From here, you can have a look at the documentation to alter behavior or add more features. Here are three examples.

Colors

You can define the 16 basic colors with the following code:
#define CLR_R(x)   (((x) & 0xff0000) >> 16)
#define CLR_G(x)   (((x) & 0x00ff00) >>  8)
#define CLR_B(x)   (((x) & 0x0000ff) >>  0)
#define CLR_16(x)  ((double)(x) / 0xff)
#define CLR_GDK(x) (const GdkRGBA){ .red = CLR_16(CLR_R(x)), \
                                    .green = CLR_16(CLR_G(x)), \
                                    .blue = CLR_16(CLR_B(x)), \
                                    .alpha = 0 }
vte_terminal_set_colors(VTE_TERMINAL(terminal),
    &CLR_GDK(0xffffff),
    &(GdkRGBA){ .alpha = 0.85 },
    (const GdkRGBA[]){
        CLR_GDK(0x111111),
        CLR_GDK(0xd36265),
        CLR_GDK(0xaece91),
        CLR_GDK(0xe7e18c),
        CLR_GDK(0x5297cf),
        CLR_GDK(0x963c59),
        CLR_GDK(0x5E7175),
        CLR_GDK(0xbebebe),
        CLR_GDK(0x666666),
        CLR_GDK(0xef8171),
        CLR_GDK(0xcfefb3),
        CLR_GDK(0xfff796),
        CLR_GDK(0x74b8ef),
        CLR_GDK(0xb85e7b),
        CLR_GDK(0xA3BABF),
        CLR_GDK(0xffffff)
    }, 16);
While you can't see it on the screenshot1, this also enables background transparency. (Screenshot: color rendering.)

Miscellaneous settings

VTE comes with many settings to change the behavior of the terminal. Consider the following code:
vte_terminal_set_scrollback_lines(VTE_TERMINAL(terminal), 0);
vte_terminal_set_scroll_on_output(VTE_TERMINAL(terminal), FALSE);
vte_terminal_set_scroll_on_keystroke(VTE_TERMINAL(terminal), TRUE);
vte_terminal_set_rewrap_on_resize(VTE_TERMINAL(terminal), TRUE);
vte_terminal_set_mouse_autohide(VTE_TERMINAL(terminal), TRUE);
This will:
  • disable the scrollback buffer,
  • not scroll to the bottom on new output,
  • scroll to the bottom on keystroke,
  • rewrap content when the terminal size changes, and
  • hide the mouse cursor when typing.
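In the same spirit, though not shown in the original post, you can also pick the font explicitly; a small sketch (the font name here is an arbitrary example):

/* Set the terminal font explicitly (font name is an arbitrary example) */
PangoFontDescription *font =
    pango_font_description_from_string("DejaVu Sans Mono 11");
vte_terminal_set_font(VTE_TERMINAL(terminal), font);
pango_font_description_free(font);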

Update the window title

An application can change the window title using XTerm control sequences (for example, with printf "\e]2;${title}\a"). If you want the actual window title to reflect this, you need to define this function:
static gboolean
on_title_changed(GtkWidget *terminal, gpointer user_data)
{
    GtkWindow *window = user_data;
    gtk_window_set_title(window,
        vte_terminal_get_window_title(VTE_TERMINAL(terminal))?:"Terminal");
    return TRUE;
}
Then, connect it to the appropriate signal, in main():
g_signal_connect(terminal, "window-title-changed", 
    G_CALLBACK(on_title_changed), GTK_WINDOW(window));
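You can then check the behavior from inside the terminal with the escape sequence mentioned above, for example:

printf '\e]2;hello from the shell\a'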

Final words

I don't need much more as I am using tmux inside each terminal. In my own copy, I have also added the ability to complete a word using ones from the current window or other windows (also known as dynamic abbrev expansion). This requires implementing a terminal daemon to handle all terminal windows with one process, similar to urxvtcd. While writing a terminal from scratch2 suits my needs, it may not be worth it. evilvte is quite customizable and can be lightweight. Consider it as a first alternative. Honestly, I don't remember why I didn't pick it. UPDATED: evilvte has not seen an update since 2014. Its GTK+3 support is buggy. It doesn't support the latest versions of the VTE library. Therefore, it's not a good idea to use it. You should also note that the primary goal of VTE is to be a library to support GNOME Terminal. Notably, if a feature is not needed for GNOME Terminal, it won't be added to VTE. If it already exists, it will likely be deprecated and removed.

  1. Transparency is handled by the composite manager (Compton, in my case).
  2. For some definition of scratch since the hard work is handled by VTE.

5 February 2017

Vincent Bernat: A Makefile for your Go project

My most loathed feature of Go is the mandatory use of GOPATH: I do not want to put my own code next to its dependencies. Fortunately, this issue is slowly starting to be acknowledged by the main authors. In the meantime, you can work around this problem with more opinionated tools (like gb) or by crafting your own Makefile. For the latter, you can have a look at Filippo Valsorda's example or my own take which I describe in more detail here. This is not meant to be a universal Makefile but a relatively short one with some batteries included. It comes with a simple Hello World! application.

Project structure

For a standalone project, vendoring is a must-have1 as you cannot rely on your dependencies to not introduce backward-incompatible changes. Some packages are using versioned URLs but most of them aren't. There is currently no standard tool to handle vendoring. My personal take is to vendor all dependencies with Glide. It is a good practice to split an application into different packages while the main one stays fairly small. In the hellogopher example, the CLI is handled in the cmd package while the application logic for printing greetings is in the hello package:
.
├── cmd/
│   ├── hello.go
│   ├── root.go
│   └── version.go
├── glide.lock (generated)
├── glide.yaml
├── vendor/ (dependencies will go there)
├── hello/
│   ├── root.go
│   └── root_test.go
├── main.go
├── Makefile
└── README.md

Down the rabbit hole

Let's take a look at the various features of the Makefile.

GOPATH handling

Since all dependencies are vendored, only our own project needs to be in the GOPATH:
PACKAGE  = hellogopher
GOPATH   = $(CURDIR)/.gopath
BASE     = $(GOPATH)/src/$(PACKAGE)
$(BASE):
    @mkdir -p $(dir $@)
    @ln -sf $(CURDIR) $@
The base import path is hellogopher, not github.com/vincentbernat/hellogopher: this shortens imports and makes them easily distinguishable from imports of dependency packages. However, your application won't be go get-able. This is a personal choice and can be adjusted with the $(PACKAGE) variable. We just create a symlink from .gopath/src/hellogopher to our root directory. The GOPATH environment variable is automatically exported to the shell commands of the recipes. Any tool should work fine after changing the current directory to $(BASE). For example, this snippet builds the executable:
.PHONY: all
all: | $(BASE)
    cd $(BASE) && $(GO) build -o bin/$(PACKAGE) main.go

Vendoring dependencies

Glide is a bit like Ruby's Bundler. In glide.yaml, you specify what packages you need and the constraints you want on them. Glide computes a glide.lock file containing the exact versions for each dependency (including recursive dependencies) and downloads them in the vendor/ folder. I choose to check into the VCS both the glide.yaml and glide.lock files. It's also possible to only check in the first one or to also check in the vendor/ directory. A work-in-progress is currently ongoing to provide a standard dependency management tool with a similar workflow. We define two rules2:
GLIDE = glide
glide.lock: glide.yaml | $(BASE)
    cd $(BASE) && $(GLIDE) update
    @touch $@
vendor: glide.lock | $(BASE)
    cd $(BASE) && $(GLIDE) --quiet install
    @ln -sf . vendor/src
    @touch $@
We use a variable to invoke glide. This enables a user to easily override it (for example, with make GLIDE=$GOPATH/bin/glide).

Using third-party tools

Most projects need some third-party tools. We can either expect them to be already installed or compile them in our private GOPATH. For example, here is the lint rule:
BIN    = $(GOPATH)/bin
GOLINT = $(BIN)/golint
$(BIN)/golint: | $(BASE) # ❶
    go get github.com/golang/lint/golint
.PHONY: lint
lint: vendor | $(BASE) $(GOLINT) # ❷
    @cd $(BASE) && ret=0 && for pkg in $(PKGS); do \
        test -z "$$($(GOLINT) $$pkg | tee /dev/stderr)" || ret=1 ; \
     done ; exit $$ret
As for glide, we give the user a chance to override which golint executable to use. By default, it uses a private copy. But a user can use their own copy with make GOLINT=/usr/bin/golint. In ❶, we have the recipe to build the private copy. We simply issue go get3 to download and build golint. In ❷, the lint rule executes golint on each package contained in the $(PKGS) variable. We'll explain this variable in the next section.

Working with non-vendored packages only

Some commands need to be provided with a list of packages. Because we use a vendor/ directory, the shortcut ./... is not what we expect as we don't want to run tests on our dependencies4. Therefore, we compose a list of packages we care about:
PKGS = $(or $(PKG), $(shell cd $(BASE) && \
    env GOPATH=$(GOPATH) $(GO) list ./... | grep -v "^$(PACKAGE)/vendor/"))
If the user has provided the $(PKG) variable, we use it. For example, if they want to lint only the cmd package, they can invoke make lint PKG=hellogopher/cmd which is more intuitive than specifying PKGS. Otherwise, we just execute go list ./... but we remove anything from the vendor directory.

Tests

Here are some rules to run tests:
TIMEOUT = 20
TEST_TARGETS := test-default test-bench test-short test-verbose test-race
.PHONY: $(TEST_TARGETS) check test tests
test-bench:   ARGS=-run=__absolutelynothing__ -bench=.
test-short:   ARGS=-short
test-verbose: ARGS=-v
test-race:    ARGS=-race
$(TEST_TARGETS): test
check test tests: fmt lint vendor | $(BASE)
    @cd $(BASE) && $(GO) test -timeout $(TIMEOUT)s $(ARGS) $(PKGS)
A user can invoke tests in different ways:
  • make test runs all tests;
  • make test TIMEOUT=10 runs all tests with a timeout of 10 seconds;
  • make test PKG=hellogopher/cmd only runs tests for the cmd package;
  • make test ARGS="-v -short" runs tests with the specified arguments;
  • make test-race runs tests with race detector enabled.
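The fmt target used as a prerequisite above is not shown in these excerpts. A minimal sketch of such a rule, assuming the same conventions as the other rules (the actual Makefile may differ):

.PHONY: fmt
fmt: | $(BASE)
    @cd $(BASE) && $(GO) fmt $(PKGS)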

Tests coverage

go test includes a test coverage tool. Unfortunately, it only handles one package at a time and you have to explicitly list the packages to be instrumented, otherwise the instrumentation is limited to the currently tested package. If you provide too many packages, the compilation time will skyrocket. Moreover, if you want an output compatible with Jenkins, you'll need some additional tools.
COVERAGE_MODE    = atomic
COVERAGE_PROFILE = $(COVERAGE_DIR)/profile.out
COVERAGE_XML     = $(COVERAGE_DIR)/coverage.xml
COVERAGE_HTML    = $(COVERAGE_DIR)/index.html
.PHONY: test-coverage test-coverage-tools
test-coverage-tools: | $(GOCOVMERGE) $(GOCOV) $(GOCOVXML) # ❶
test-coverage: COVERAGE_DIR := $(CURDIR)/test/coverage.$(shell date -Iseconds)
test-coverage: fmt lint vendor test-coverage-tools | $(BASE)
    @mkdir -p $(COVERAGE_DIR)/coverage
    @cd $(BASE) && for pkg in $(PKGS); do \ # ❷
        $(GO) test \
            -coverpkg=$$($(GO) list -f '{{ join .Deps "\n" }}' $$pkg | \
                    grep '^$(PACKAGE)/' | grep -v '^$(PACKAGE)/vendor/' | \
                    tr '\n' ',')$$pkg \
            -covermode=$(COVERAGE_MODE) \
            -coverprofile="$(COVERAGE_DIR)/coverage/`echo $$pkg | tr "/" "-"`.cover" $$pkg ;\
     done
    @$(GOCOVMERGE) $(COVERAGE_DIR)/coverage/*.cover > $(COVERAGE_PROFILE)
    @$(GO) tool cover -html=$(COVERAGE_PROFILE) -o $(COVERAGE_HTML)
    @$(GOCOV) convert $(COVERAGE_PROFILE) | $(GOCOVXML) > $(COVERAGE_XML)
First, we define some variables to let the user override them. We also require the following tools (in ❶):
  • gocovmerge merges profiles from different runs into a single one;
  • gocov-xml converts a coverage profile to the Cobertura format;
  • gocov is needed to convert a coverage profile to a format handled by gocov-xml.
The rules to build those tools are similar to the rule for golint described a few sections ago. In ❷, for each package to test, we run go test with the -coverprofile argument. We also explicitly provide the list of packages to instrument to -coverpkg by using go list to get a list of dependencies for the tested package and keeping only our own.
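For reference, a sketch of one such tool rule, following the same pattern as the golint rule (assuming gocovmerge's upstream import path, github.com/wadey/gocovmerge):

GOCOVMERGE = $(BIN)/gocovmerge
$(BIN)/gocovmerge: | $(BASE)
    go get github.com/wadey/gocovmerge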

Final result

While the main goal of using a Makefile was to work around GOPATH, it's also a good place to hide the complexity of some operations, notably around test coverage. The excerpts provided in this post are a bit simplified. Have a look at the final result for more perks!

  1. In Go, vendoring is about both bundling and dependency management. As the Go ecosystem matures, the bundling part (fixed snapshots of dependencies) may become optional but the vendor/ directory may stay for dependency management (retrieval of the latest versions of dependencies matching a set of constraints).
  2. If you don t want to automatically update glide.lock when a change is detected in glide.yaml, rename the target to deps-update and make it a phony target.
  3. There is some irony in bad-mouthing go get and then immediately using it because it is convenient.
  4. I think ./... should not include the vendor/ directory by default. Dependencies should be trusted to have run their own tests in the environment they expect them to succeed. Unfortunately, this is unlikely to change.

12 January 2017

Ritesh Raj Sarraf: Laptop Mode Tools 1.71

I am pleased to announce the 1.71 release of Laptop Mode Tools. This release includes some new modules, some bug fixes, and some efficiency improvements too. Many thanks to our users; most changes in this release are contributions from our users. A filtered list of changes is mentioned below. For the full log, please refer to the git repository. Source tarball, Fedora/SUSE RPM packages available at: https://github.com/rickysarraf/laptop-mode-tools/releases
Debian packages will be available soon in Unstable.
Homepage: https://github.com/rickysarraf/laptop-mode-tools/wiki
Mailing List: https://groups.google.com/d/forum/laptop-mode-tools
1.71 - Thu Jan 12 13:30:50 IST 2017
    * Fix incorrect import of os.putenv
    * Merge pull request #74 from Coucouf/fix-os-putenv
    * Fix documentation on where we read battery capacity from
    * cpuhotplug: allow disabling specific cpus
    * Merge pull request #78 from aartamonau/cpuhotplug
    * runtime-pm: refactor listed_by_id()
    * wireless-power: Use iw and fall back to iwconfig if it is not available
    * Prefer available AC supply information over battery state to determine ON_AC
    * On startup, we want to force the full execution of LMT.
    * Device hotplugs need a forced execution for LMT to apply the proper settings
    * runtime-pm: Refactor list_by_type()
    * kbd-backlight: New module to control keyboard backlight brightness
    * Include Transmit power saving in wireless cards
    * Don't run in a subshell
    * Try harder to check battery charge
    * New module: vgaswitcheroo
    * Revive bluetooth module. Use rfkill primarily. Also don't unload (incomplete list of) kernel modules
What is Laptop Mode Tools
Description: Tools for Power Savings based on battery/AC status
 Laptop mode is a Linux kernel feature that allows your laptop to save
 considerable power, by allowing the hard drive to spin down for longer
 periods of time. This package contains the userland scripts that are
 needed to enable laptop mode.
 .
 It includes support for automatically enabling laptop mode when the
 computer is working on batteries. It also supports various other power
 management features, such as starting and stopping daemons depending on
 power mode, automatically hibernating if battery levels are too low, and
 adjusting terminal blanking and X11 screen blanking
 .
 laptop-mode-tools uses the Linux kernel's Laptop Mode feature and thus
 is also used on Desktops and Servers to conserve power

2 January 2017

Santiago García Mantiñán: ScreenLock on Jessie's systemd

Something I was used to, and which came as standard on wheezy if you installed acpi-support, was screen locking when you were suspending, hibernating, and so on. This is something that I still haven't found on Jessie, and which somebody had pointed me to solve via /lib/systemd/system-sleep/whatever hacking, but that didn't seem quite right, so I gave it a look again and this time I was able to add some config files at /etc/systemd and then a script which does what acpi-support used to do before.

Edit: Michael Biebl has suggested on my Google+ post that this is an ugly hack and that one shouldn't use this solution; instead we should use solutions with direct support for logind, like desktops with built-in support or xss-lock. The reasons for this being ugly are pointed out in this bug.

Edit (2): I've just done the recommended thing for LXDE, but it should be similar for any other desktop or window manager lacking logind integration: you just need to apt-get install xss-lock and then add @xss-lock -- xscreensaver-command --lock to .config/lxsession/LXDE/autostart, or do it through lxsession-default-apps on the autostart tab. Oh, btw, you don't need acpid or the acpi-support* packages with this setup, so you can remove them safely and avoid weird things.

The main thing here is this little config file, /etc/systemd/system/screenlock.service:

[Unit]
Description=Lock X session
Before=sleep.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/screenlock.sh

[Install]
WantedBy=sleep.target

This config file is activated by running: systemctl enable screenlock

As you can see, that config file calls /usr/local/sbin/screenlock.sh, which is this little script:

#!/bin/sh
# This depends on acpi-support being installed
# and on /etc/systemd/system/screenlock.service
# which is enabled with: systemctl enable screenlock
test -f /usr/share/acpi-support/state-funcs || exit 0
. /etc/default/acpi-support
. /usr/share/acpi-support/power-funcs
if [ x$LOCK_SCREEN = xtrue ]; then
    . /usr/share/acpi-support/screenblank
fi

The script of course needs execution permissions. I tend to combine this with my power button making the machine hibernate, which was also easier to do before and which is now done at /etc/systemd/logind.conf (doesn't the name already tell you?) where you have to set:

HandlePowerKey=hibernate

And that's all.

22 December 2016

Shirish Agarwal: My letter to Government of Maharashtra on Real Estate Rules and Regulation Draft rules

While I try to minimize Politics and Economics as much as I can on this blog, it sometimes surfaces. It is possible that some people may benefit or at least be aware. A bit of background is necessary before I jump into the intricacies of the Maharashtra Real Estate Rules and Regulation Draft Rules 2016 (RERA) . Since ever, but more prominently since 2007/8 potential homeowners from across the country have been suffering at the hands of the builder/promoter for number of years. While it would be wrong to paint all the Real Estate Developers and Builders as cheats (we as in all tenants and homeowners hope there are good ones out there) many Real Estate Builders and promoters have cheated homeowners of their hard-earned money. This has also lessened the secondary (resale) market and tenants like me have to fight over morsels as supply is tight. There were two broad ways in which the cheating is/was done a. Take deposits and run away i.e. fly by night operators Here the only option for a homeowner is to file an FIR (First Information Report) and hope the culprits are caught. 99% of the time the builder/promoter goes somewhere abroad and the potential home buyers/home-owners are left holding the can. This is usually done by small real estate promoters and builders. b. The big boys would take all or most money of the project, may register or not register the flat in your name, either build a quarter or half-finished building and then make excuses. There are some who do not even build. The money given is used by the builder/developer either for his own needs or using that money in some high-profile project which is expensive and may have huge returns. They know that home-owners can t do anything, at the most go to the court which will take more than a decade or two during which time the developer would have interest-free income and do whatever he wants to do. One of the bigger stories which came up this year was when the Indian Cricket Captain, M.S. Dhoni (cricket is a religion in India, and the cricketers gods for millions of Indians) had to end his brand engagement and ambassadorship from Amrapali Housing Group. Apparently, his wife Sakshi was on the Board of directors at Amrapali Housing and had to resign The Government knew of such issues and had been working since last few years. Under the present Government, a Model Agreement and a Model Real Estate Rules and Regulation Bill was passed on 31st March and came into force on 1st May 2016. India, similar to the U.S. and U.K. follows a federal structure. While I have shared this before, most of the laws in India fall in either of three lists, Central List, Concurrent Lists and State Lists. Housing for instance, is a state subject so any laws concerning housing has to be made by the state legislature. Having a statutory requirement to put the bill in 6 months from 1st of May, the Government of Maharashtra chose to put the draft rules in public domain on 12th December 2016, about 10 days ago and there were efforts to let it remain low-key so people do not object as we are still in the throes of demonetisation. By law they should have given 30 days for people to raise objections and give suggestions. The State Government too could have easily asked an extension and as both the State and the Centre are of the same Political Party they would have easily got it. With that, below is the e-mail I sent to suggesstionsonrera@maharashtra.gov.in Sub Some suggestions for RERA biggest suggestion, need to give more time study the implications for house-owners. 
Respected Sir/Madame, I will be publishing the below mail as a public letter on my blog as well. I am writing as a citizen, a voter, a potential home owner, currently a tenant . If houses supply is not in time, it is us, the tenants who have the most to lose as we have to fight over whatever is in the market. I do also hope to be a home buyer at some point in time so these rules would affect me also somewhere in the hazy future. I came to know through the media that Maharashtra Govt. recently introduced draft rules for RERA Real Estate (Regulation and Development) Act, 2016 . I hope to impress upon you that these proposed Rules and Regulations need to be thoroughly revised and new draft rules shared with the public at large with proper announcement in all newspapers and proper time ( more than a month ) to study and give replies on the said matter. My suggestions and complaints are as under a. The first complaint and suggestion is that the date between the draft regulations and suggestions being invited by members of public is and was too little 12 December 2016 23 December 2016 (only 11 days) for almost 90 pages of Government rules and regulations which needs multiple rounds of re-reading to understand the implications of the draft rules . Add to that unlike the Central Building Legislation, Model Agreement which was prepared by Centre and also given wide publicity, the Maharashtra Govt. didn t do any such publicity to bring it to the
notice of the people. b. I ask where was the hurry to publish these draft rules now, when everybody is suffering through the result of the cash-crunch on top of other things. If the said draft rules were put up in January 2017, I am sure more people would have responded to them. It raises suspicion in everybody's mind about the timing of sharing the draft rules and the limited time given to people to respond. E.g. when TRAI (Telecom Regulatory Authority of India) asks for suggestions it gives more than a month, yet for something like housing, which is an existential question for everybody, i.e. the poor, the middle and the rich, you have given far less time. While I could change my telephone service provider at a moment's notice without huge loss, the same cannot be said for either a house owner (in case of a builder) or a tenant. This is just not done. c. The documents are at https://housing.maharashtra.gov.in/sitemap/housing/Rera_rules.htm under different sub-headings while the correct structure of the documents can be found at NAREDCO's site
http://naredco.in/notifications.asp . At the very least, the documents should have been in proper order. Coming to some of the salient points raised both in the media and elsewhere 1. On page 6 of Part IV-A Ext1.pdf you have written Explanation.-The registration of a real estate project shall not be required,- (i) for the purpose of any renovations or repair or redevelopment which does not involve marketing, advertisement, selling or new allotment of any apartment , plot or building as the case may be under
the real estate project; RERA draft rules What it means is that the house owner and by the same stroke the tenant would have absolutely no voice to oppose any changes made to the structure at any point of time after the building is built. This means the builder is free to build 12-14-16 even 20 stories building when the original plans were for 6-8-10. This rule gives the builder to do free for all till the building doesn t get converted into a society, a process which does and can take years to happen. 2. A builder has to take innumerable permissions from different authorities at each and every stage till possession of a said property isn t handed over to a home buyer and by its extension to the tenant. Now any one of these authorities could sit on the papers and there is no accountability of by when papers would be passed under a competent authority s desk. There was a wide belief that there would be some
rules and regulations framed in this regard but the draft rules are silent on the subject matter. 3. In Draft rule 5. page 8 of Part IV-A Ext1.pdf you write Withdrawal of amounts deposited in separate account.-(1) With regard to the withdrawal of amounts deposited under sub-clause (D) of clause (l) of sub-section (2) of section 4, the following provisions shall apply:- (i) For new projects which will be registered after commencement. Deposit in the escrow account is from now onwards. So what happens to the projects which are ongoing at the moment, either at the registration stage or at building stage, thousands of potential house owners would be left to fend for themselves. There needs to be some recourse for them as well. 3b. Another suggestion is that the house-owners are duly informed when promoters/builders are taking money from the bank and should have the authority to see that proper documents and procedure was followed. It is possible that unscrupulous elements may either bypass it or give some different documents which are not in knowledge of the house-owner, thus defeating the purpose of the escrow account itself. 4. On page 44 of Pt.IV-A Ext.161 in the Model Agreement to be entered
between the Promoter and the Alottee you have mentioned (i)The Allottee hereby agrees to purchase from the Promoter and the Promoter hereby agrees to sell to the Allottee one Apartment No. .. of the type .. of carpet area admeasuring .. sq. metres on floor in the building __________along with (hereinafter referred to as the Apartment ) as shown in the Floor plan thereof hereto annexed and marked Annexures C
for the consideration of Rs. . including Rs. . being the proportionate price of the common areas and facilities appurtenant to the premises, the nature, extent and description of the common/limited common areas and facilities which are more particularly described in the Second Schedule annexed herewith. (the price of the Apartment including the proportionate price of the limited common areas and facilities and parking spaces should be shown separately). (ii) The Allottee hereby agrees to purchase from the Promoter and the Promoter hereby agrees to sell to the Allottee garage bearing Nos ____ situated at _______ Basement and/or stilt and /or ____podium being
constructed in the layout for the consideration of Rs. ____________/- (iii) The Allottee hereby agrees to purchase from the Promoter and the Promoter hereby agrees to sell to the Allottee Car parking spaces bearing Nos ____ situated at _______ Basement and/or stilt and /or ____podium and/or open parking space, being constructed in the layout for the
consideration of Rs. ____________/-. The total aggregate consideration amount for the apartment including garages/car parking spaces is
thus Rs.______/- Draft rules. What has been done here is the parking space has been divorced from sale of the flat . It is against natural justice, logic, common sense as well-known precedents in jurisprudence (i.e. law) In September 2010, the bench of Justices R M Lodha and A K Patnaik had ruled in a judgement stating developers cannot sell parking spaces as independent real-estate units. The court ruled that parking areas are common areas and facilities . This was on behalf of a precedent in Mumbai High Court as well. http://www.reinventingparking.org/2010/09/important-parking-ruling-by-indias.html This has been reiterated again and again in courts as well as consumer
forums http://timesofindia.indiatimes.com/city/mumbai/Cant-charge-flat-buyer-extra-for-parking-slot/articleshow/22475233.cms and has been the norm in several Apartment Acts over multiple states http://apartmentadda.com/blog/2015/02/19/rules-pertaining-to-parking-spaces-in-apartment-complexes/ 5. In case of dispute, the case will go to the high court, which is inundated with a huge number of pending cases. As recently as August 2016 there was a news item in the Indian Express which talks about the spike in pending cases. Putting a case in the high court will weigh heavily on the homeowner, financially and
mentally http://indianexpress.com/article/cities/mumbai/more-cases-and-increased-staff-strength-putting-pressure-on-bombay-high-court-building-2964796/ It may be better to use the services of National Consumer Disputes Redressal Commission'(NCDRC) where there is possibility of quicker justice and quick resolution. There is possibility of group actions taking place which will reduce duplicity of work on behalf of the petitioners. 6. There is neither any clarity, incentive or punitive action against the promoter/builder if s/he delay conveyance to the society in order to get any future developmental and FSI rights. To delay handing over conveyance, the builders delay completion of the last building in a said project. there should be both a compensatory and punitive actions taken against the builder if he is unable to prove any genuine cause for the same. 7. There needs to be the provision with regard to need for developers to make public disclosures pertaining to building approvals. This while I had shared above needs to be explicitly mentioned so house-owners know the promoter/builder are on the right path. 8. There needs to be a provision that prohibits refusal to sell property to any person on the basis of his/her religion, marital status or dietary preferences. 9. There is lot of ambiguity if criminal proceedings can be initiated against a promoter/developer if s/he fails to deliver the flat on time. The developer should be criminally liable if he doesn t give the flat with all the amenities, fixtures and anything which was on agreement signed by both parties and for which the payment has been given in
full at time of possession of a flat. 10. Penalties for the promoter/builder is capped at 10% in case of any wrong-doing. Apart from proving the charge, the onus of which would lie on the house-owner, capping it at 10% is similar to A teacher telling a naughty student, do whatever you want to do, I am only going to hit you 5 times. Such a drafting encourages the Promoter/builder to play mischief. The builder knows his exposure is pretty limited. Liability is limited so he will try to get with whatever he can. Having a high penalty clause will deter him. 11. There was talk and shown in the Center s model agreement the precedent of providing names, addresses and contact details of other allot-tees or home-owners of a building that would have multiple dwelling units . This is nowhere either in the agreement or mentioned anywhere else in the four documents. 12. An addition to the above would be that the details provided should be correct and updated as per the records maintained by the Promoter/builder. 13. Today, there is no way for a potential house-owner to know if the builder had broken any norms or has any cases in court pending against him. There should be a way for the potential house-owner to find out. 14. A builder can terminate a flat purchase agreement by giving just a week s notice on email to the buyer who defaults on an instalment. But the developer can refund the money without interest to the
purchaser at leisure, within six months.Under MOFA (the earlier rules), the developer could cancel the agreement after giving a 15 days notice, and the builder could resell the flat only after refunding money to the original buyer. Under the new draft rules, a builder can immediately sell the flat after terminating the agreement. 15. The new draft rules say a buyer must pay 30% of the total cost while signing the agreement and 45% when the plinth of the building is constructed. The earlier state law stipulated 20% payment when the
agreement is signed with the developer. 16. The Central model agreement and rules proposed a fee of INR Rs 1,000 for filing complaints before housing authority; the state draft has proposed to hike this fee to Rs INR Rs. 10,000/- 17. Reading the Central Model Agreement, key disclosures under Section 4 (2)and Rule 3 (2) of the Central Model Rules have been excluded to be put up on the website of the Authority. These included carpet area of flat, encumbrance certificate (this would have disclosed encumbrances in respect of the land where the real estate project is proposed to be undertaken), copy of the legal title report and sanctioned plan of the building. Due to this house-owner would always be in dark and assume that everything is alright. There have been multiple instances of this over years Some examples http://www.deccanchronicle.com/140920/nation-current-affairs/article/builder-encroaches-%E2%80%98raja-kaluve%E2%80%99 http://indianexpress.com/article/cities/ahmedabad/surat-builder-grabs-tribal-land-using-fake-documents/ http://www.thehindu.com/news/cities/bangalore/bmtf-books-exmayor-wife-for-grabbing-ca-site/article7397062.ece http://timesofindia.indiatimes.com/city/thane/24-acre-ambernath-plot-usurped-with-fake-docus/articleshow/55654139.cms 18. The Central rule requires a builder to submit an annual report including profit and loss account, balance sheet, cash flow statement, directors report and auditors report for the preceding three financial years, among other things. However, the Maharashtra draft rules are silent on such a requirement. While the above is what I could perceive in the limited amount I came to know. This should be enough to convince that more needs to be done from the house-owner s side. Update Just saw Quint s Op-Ed goes in more detail.
Filed under: Miscellenous Tagged: #Draft Rules for Real Estate Rules and Regulation (2016), #hurry, #Name, #Response, Amrapali Group, Contact details of other hom-owners in a scheme., M.S. Dhoni

4 December 2016

Ben Hutchings: Linux Kernel Summit 2016, part 2

I attended this year's Linux Kernel Summit in Santa Fe, NM, USA and made notes on some of the sessions that were relevant to Debian. LWN also reported many of the discussions. This is the second and last part of my notes; part 1 is here. Kernel Hardening Kees Cook presented the ongoing work on upstream kernel hardening, also known as the Kernel Self-Protection Project or KSPP. GCC plugins The kernel build system can now build and use GCC plugins to implement some protections. This requires gcc 4.5 and the plugin headers installed. It has been tested on x86, arm, and arm64. It is disabled by CONFIG_COMPILE_TEST because CI systems using allmodconfig/allyesconfig probably don't have those installed, but this ought to be changed at some point. There was a question as to how plugin headers should be installed for cross-compilers or custom compilers, but I didn't hear a clear answer to this. Kees has been prodding distribution gcc maintainers to package them. Mark Brown mentioned the Linaro toolchain being widely used; Kees has not talked to its maintainers yet. Probabilistic protections These protections are based on hidden state that an attacker will need to discover in order to make an effective attack; they reduce the probability of success but don't prevent it entirely. Kernel address space layout randomisation (KASLR) has now been implemented on x86, arm64, and mips for the kernel image. (Debian enables this.) However there are still lots of information leaks that defeat this. This could theoretically be improved by relocating different sections or smaller parts of the kernel independently, but this requires re-linking at boot. Aside from software information leaks, the branch target predictor on (common implementations of) x86 provides a side channel to find addresses of branches in the kernel. Page and heap allocation, etc., is still quite predictable. struct randomisation (RANDSTRUCT plugin from grsecurity) reorders members in (a) structures containing only function pointers (b) explicitly marked structures. This makes it very hard to attack custom kernels where the kernel image is not readable. But even for distribution kernels, it increases the maintenance burden for attackers. Deterministic protections These protections block a class of attacks completely. Read-only protection of kernel memory is either mandatory or enabled by default on x86, arm, and arm64. (Debian enables this.) Protections against execution of user memory in kernel mode are now implemented in hardware on x86 (SMEP, in Intel processors from Skylake onward) and on arm64 (PXN, from ARMv8.1). But Skylake is not available for servers and ARMv8.1 is not yet implemented at all! s390 always had this protection. It may be possible to 'emulate' this using other hardware protections. arm (v7) and arm64 now have this, but x86 doesn't. Linus doesn't like the overhead of previously proposed implementations for x86. It is possible to do this using PCID (in Intel processors from Sandy Bridge onward), which has already been done in PaX - and this should be fast enough. Virtually mapped stacks protect against stack overflow attacks. They were implemented as an option for x86 only in 4.9. (Debian enables this.) Copies to or from user memory sometimes use a user-controlled size that is not properly bounded. Hardened usercopy, implemented as an option in 4.8 for many architectures, protects against this. (Debian enables this.) Memory wiping (zero on free) protects against some information leaks and use-after-free bugs. 
It was already implemented as a debug feature with a non-zero poison value, but at some performance cost. Zeroing can be cheaper since it allows the allocator to skip zeroing on reallocation. That was implemented as an option in 4.6. (Debian does not currently enable this but we might do if the performance cost is low enough.) Constification (with the CONSTIFY gcc plugin) reduces the amount of static data that can be written to. As with RANDSTRUCT, this is applied to function pointer tables and explicitly marked structures. Instances of some types need to be modified very occasionally. In PaX/Grsecurity this is done with pax_{open,close}_kernel() which globally disables write protection temporarily. It would be preferable to override write protection in a more directed way, so that the permission to write doesn't leak into any other code that interrupts this process. The feature is not in mainline yet. Atomic wrap detection protects against reference-counting bugs which can result in a use-after-free. Overflow and underflow are trapped and result in an 'oops'. There is no measurable performance impact. It would be applied to all operations on the atomic_t type, but there needs to be an opt-out for atomics that are not ref-counters - probably by adding an atomic_wrap_t type for them. This has been implemented for x86, arm, and arm64 but is not in mainline yet.

Kernel Freezer Hell For the second year running, Jiri Kosina raised the problem of 'freezing' kthreads (kernel-mode threads) in preparation for system suspend (suspend to RAM, or hibernation). What are the semantics? What invariants should be met when a kthread gets frozen? They are not defined anywhere. Most freezable threads don't actually need to be quiesced. Also many non-freezable threads are pointlessly calling try_to_freeze() (probably due to copying code without understanding it). At a system level, what we actually need is I/O and filesystem consistency. This should be achieved by other means; the system suspend code should not need to directly freeze threads.

Kernel Documentation Jon Corbet and Mauro Carvalho presented the recent work on kernel documentation. The kernel's documentation system was a house of cards involving DocBook and a lot of custom scripting. Both the DocBook templates and plain text files are gradually being converted to reStructuredText format, processed by Sphinx. However, manual page generation is currently 'broken' for documents processed by Sphinx. There are about 150 files at the top level of the documentation tree, that are being gradually moved into subdirectories. The most popular files, that are likely to be referenced in external documentation, have been replaced by placeholders. Sphinx is highly extensible and this has been used to integrate kernel-doc. It would be possible to add extensions that parse and include the MAINTAINERS file and Documentation/ABI/ files, which have their own formats, but the documentation maintainers would prefer not to add extensions that can't be pushed to Sphinx upstream. There is lots of obsolete documentation, and patches to remove those would be welcome. Linus objected to PDF files recently added under the Documentation/media directory - they are not the source format so should not be there! They should be generated from the corresponding SVG or image files at build time.

Issues around Tracepoints Steve Rostedt and Shuah Khan led a discussion about tracepoints. Currently each maintainer decides which tracepoints to create.
The cost of each added tracepoint is minimal, but the cost of very many tracepoints is more substantial. So there is such a thing as too many tracepoints, and we need a policy to decide when they are justified. They advised not to create tracepoints just in case, since kprobes can be used for tracing (almost) anywhere dynamically. There was some support for requiring documentation of each new tracepoint. That may dissuade introduction of obscure tracepoints, but also creates a higher expectation of stability. Tools such as bcc and IOVisor are now being created that depend on specific tracepoints or even function names (through kprobes). Should we care about breaking them? Linus said that we should strive to be polite to developers and users relying on tracepoints, but if it's too painful to maintain a tracepoint then we should go ahead and change it. Where the end users of the tool are themselves developers it's more reasonable to expect them to upgrade the tool and we should care less about changing it. In some cases tracepoints could provide dummy data for compatibility (as is done in some places in procfs).

3 December 2016

Vincent Bernat: Build-time dependency patching for Android

This post shows how to patch an external dependency for an Android project at build-time with Gradle. This leverages the Transform API and Javassist, a Java bytecode manipulation tool.
buildscript {
    dependencies {
        classpath 'com.android.tools.build:gradle:2.2.+'
        classpath 'com.android.tools.build:transform-api:1.5.+'
        classpath 'org.javassist:javassist:3.21.+'
        classpath 'commons-io:commons-io:2.4'
    }
}
Disclaimer: I am not a seasoned Android programmer, so take this with a grain of salt.

Context

This section adds some context to the example. Feel free to skip it. Dashkiosk is an application to manage dashboards on many displays. It provides an Android application you can install on one of those cheap Android sticks. Under the hood, the application is an embedded webview backed by the Crosswalk Project web runtime which brings an up-to-date web engine, even for older versions of Android1. Recently, a security vulnerability has been spotted in how invalid certificates were handled. When a certificate cannot be verified, the webview defers the decision to the host application by calling the onReceivedSslError() method:
Notify the host application that an SSL error occurred while loading a resource. The host application must call either callback.onReceiveValue(true) or callback.onReceiveValue(false). Note that the decision may be retained for use in response to future SSL errors. The default behavior is to pop up a dialog.
The default behavior is specific to Crosswalk webview: the Android builtin one just cancels the load. Unfortunately, the fix applied by Crosswalk is different and, as a side effect, the onReceivedSslError() method is not invoked anymore2. Dashkiosk comes with an option to ignore TLS errors3. The mentioned security fix breaks this feature. The following example will demonstrate how to patch Crosswalk to recover the previous behavior4.
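For comparison, here is a minimal sketch of how a host application handles this callback with the stock Android webview (my illustration, not Dashkiosk's code; Crosswalk's callback differs as described above):

import android.net.http.SslError;
import android.webkit.SslErrorHandler;
import android.webkit.WebView;
import android.webkit.WebViewClient;

// Hypothetical illustration: deciding on SSL errors in the builtin webview.
public class LenientWebViewClient extends WebViewClient {
    @Override
    public void onReceivedSslError(WebView view, SslErrorHandler handler,
                                   SslError error) {
        // Proceed despite the TLS error; dangerous unless you know why.
        handler.proceed();
    }
}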

Simple method replacement

Let's replace the shouldDenyRequest() method from the org.xwalk.core.internal.SslUtil class with this version:
// In SslUtil class
public static boolean shouldDenyRequest(int error) {
    return false;
}

Transform registration

Gradle Transform API enables the manipulation of compiled class files before they are converted to DEX files. To declare a transform and register it, include the following code in your build.gradle:
import com.android.build.api.transform.Context
import com.android.build.api.transform.QualifiedContent
import com.android.build.api.transform.Transform
import com.android.build.api.transform.TransformException
import com.android.build.api.transform.TransformInput
import com.android.build.api.transform.TransformOutputProvider
import org.gradle.api.logging.Logger
class PatchXWalkTransform extends Transform {
    Logger logger = null;

    public PatchXWalkTransform(Logger logger) {
        this.logger = logger
    }

    @Override
    String getName() {
        return "PatchXWalk"
    }

    @Override
    Set<QualifiedContent.ContentType> getInputTypes() {
        return Collections.singleton(QualifiedContent.DefaultContentType.CLASSES)
    }

    @Override
    Set<QualifiedContent.Scope> getScopes() {
        return Collections.singleton(QualifiedContent.Scope.EXTERNAL_LIBRARIES)
    }

    @Override
    boolean isIncremental() {
        return true
    }

    @Override
    void transform(Context context,
                   Collection<TransformInput> inputs,
                   Collection<TransformInput> referencedInputs,
                   TransformOutputProvider outputProvider,
                   boolean isIncremental) throws IOException, TransformException, InterruptedException {
        // We should do something here
    }
}
// Register the transform
android.registerTransform(new PatchXWalkTransform(logger))
The getInputTypes() method should return the set of types of data consumed by the transform. In our case, we want to transform classes. Another possibility is to transform resources. The getScopes() method should return a set of scopes for the transform. In our case, we are only interested in the external libraries. It's also possible to transform our own classes. The isIncremental() method returns true because we support incremental builds. The transform() method is expected to take all the provided inputs and copy them (with or without modifications) to the location supplied by the output provider. We didn't implement this method yet. This causes the removal of all external dependencies from the application.

Noop transform

To keep all external dependencies unmodified, we must copy them:
@Override
void transform(Context context,
               Collection<TransformInput> inputs,
               Collection<TransformInput> referencedInputs,
               TransformOutputProvider outputProvider,
               boolean isIncremental) throws IOException, TransformException, InterruptedException {
    inputs.each {
        it.jarInputs.each {
            def jarName = it.name
            def src = it.getFile()
            def dest = outputProvider.getContentLocation(jarName,
                                                         it.contentTypes, it.scopes,
                                                         Format.JAR);
            def status = it.getStatus()
            if (status == Status.REMOVED) { // ❶
                logger.info("Remove ${src}")
                FileUtils.delete(dest)
            } else if (!isIncremental || status != Status.NOTCHANGED) { // ❷
                logger.info("Copy ${src}")
                FileUtils.copyFile(src, dest)
            }
        }
    }
}
We also need two additional imports:
import com.android.build.api.transform.Status
import org.apache.commons.io.FileUtils
Since we are handling external dependencies, we only have to manage JAR files. Therefore, we only iterate on jarInputs and not on directoryInputs. There are two cases when handling an incremental build: either the file has been removed (❶) or it has been modified (❷). In all other cases, we can safely assume the file is already correctly copied.

JAR patching

When the external dependency is the Crosswalk JAR file, we also need to modify it. Here is the first part of the code (replacing ❷):
if ("$ src " ==~ ".*/org.xwalk/xwalk_core.*/classes.jar")  
    def pool = new ClassPool()
    pool.insertClassPath("$ src ")
    def ctc = pool.get('org.xwalk.core.internal.SslUtil') //  
    def ctm = ctc.getDeclaredMethod('shouldDenyRequest')
    ctc.removeMethod(ctm) //  
    ctc.addMethod(CtNewMethod.make("""
public static boolean shouldDenyRequest(int error)  
    return false;
 
""", ctc)) //  
    def sslUtilBytecode = ctc.toBytecode() //  
    // Write back the JAR file
    //  
  else  
    logger.info("Copy $ src ")
    FileUtils.copyFile(src, dest)
 
We also need the following additional imports to use Javassist:
import javassist.ClassPath
import javassist.ClassPool
import javassist.CtNewMethod
Once we have located the JAR file we want to modify, we add it to our classpath and retrieve the class we are interested in (❶). We locate the appropriate method and delete it (❷). Then, we add our custom method using the same name (❸). The whole operation is done in memory. We retrieve the bytecode of the modified class in ❹. The remaining step is to rebuild the JAR file:
def input = new JarFile(src)
def output = new JarOutputStream(new FileOutputStream(dest))
// ❶
input.entries().each {
    if (!it.getName().equals("org/xwalk/core/internal/SslUtil.class")) {
        def s = input.getInputStream(it)
        output.putNextEntry(new JarEntry(it.getName()))
        IOUtils.copy(s, output)
        s.close()
    }
}
// ❷
output.putNextEntry(new JarEntry("org/xwalk/core/internal/SslUtil.class"))
output.write(sslUtilBytecode)
output.close()
We need the following additional imports:
import java.util.jar.JarEntry
import java.util.jar.JarFile
import java.util.jar.JarOutputStream
import org.apache.commons.io.IOUtils
There are two steps. In ❶, all classes are copied to the new JAR, except the SslUtil class. In ❷, the modified bytecode for SslUtil is added to the JAR. That's all! You can view the complete example on GitHub.

More complex method replacement

In the above example, the new method doesn't use any external dependency. Let's suppose we also want to replace the sslErrorFromNetErrorCode() method from the same class with the following one:
import org.chromium.net.NetError;
import android.net.http.SslCertificate;
import android.net.http.SslError;
// In SslUtil class
public static SslError sslErrorFromNetErrorCode(int error,
                                                SslCertificate cert,
                                                String url) {
    switch(error) {
        case NetError.ERR_CERT_COMMON_NAME_INVALID:
            return new SslError(SslError.SSL_IDMISMATCH, cert, url);
        case NetError.ERR_CERT_DATE_INVALID:
            return new SslError(SslError.SSL_DATE_INVALID, cert, url);
        case NetError.ERR_CERT_AUTHORITY_INVALID:
            return new SslError(SslError.SSL_UNTRUSTED, cert, url);
        default:
            break;
    }
    return new SslError(SslError.SSL_INVALID, cert, url);
}
The major difference with the previous example is that we need to import some additional classes.

Android SDK import The classes from the Android SDK are not part of the external dependencies. They need to be imported separately. The full path of the JAR file is:
androidJar = "${android.getSdkDirectory().getAbsolutePath()}/platforms/" +
             "${android.getCompileSdkVersion()}/android.jar"
We need to load it before adding the new method into SslUtil class:
def pool = new ClassPool()
pool.insertClassPath(androidJar)
pool.insertClassPath("${src}")
def ctc = pool.get('org.xwalk.core.internal.SslUtil')
def ctm = ctc.getDeclaredMethod('sslErrorFromNetErrorCode')
ctc.removeMethod(ctm)
pool.importPackage('android.net.http.SslCertificate');
pool.importPackage('android.net.http.SslError');
// ...

External dependency import We must also import org.chromium.net.NetError and therefore, we need to put the appropriate JAR in our classpath. The easiest way is to iterate through all the external dependencies and add them to the classpath.
def pool = new ClassPool()
pool.insertClassPath(androidJar)
inputs.each {
    it.jarInputs.each {
        def jarName = it.name
        def src = it.getFile()
        def status = it.getStatus()
        if (status != Status.REMOVED) {
            pool.insertClassPath("${src}")
        }
    }
}
def ctc = pool.get('org.xwalk.core.internal.SslUtil')
def ctm = ctc.getDeclaredMethod('sslErrorFromNetErrorCode')
ctc.removeMethod(ctm)
pool.importPackage('android.net.http.SslCertificate');
pool.importPackage('android.net.http.SslError');
pool.importPackage('org.chromium.net.NetError');
ctc.addMethod(CtNewMethod.make("…"))
// Then, rebuild the JAR...
Happy hacking!

  1. Before Android 4.4, the webview was severely outdated. Starting from Android 5, the webview is shipped as a separate component with updates. Embedding Crosswalk is still convenient as you know exactly which version you can rely on.
  2. I hope to have this fixed in later versions.
  3. This may seem harmful and you are right. However, if you have an internal CA, it is currently not possible to provide your own trust store to a webview. Moreover, the system trust store is not used either. You may also want to use TLS for authentication only with client certificates, a feature supported by Dashkiosk.
  4. Crosswalk being an opensource project, an alternative would have been to patch the Crosswalk source code and recompile it. However, Crosswalk embeds Chromium and recompiling the whole thing consumes a lot of resources.

5 October 2016

Kees Cook: security things in Linux v4.8

Previously: v4.7. Here are a bunch of security things I'm excited about in Linux v4.8:

SLUB freelist ASLR Thomas Garnier continued his freelist randomization work by adding SLUB support.

x86_64 KASLR text base offset physical/virtual decoupling On x86_64, to implement the KASLR text base offset, the physical memory location of the kernel was randomized, which resulted in the virtual address being offset as well. Due to how the kernel's -2GB addressing works (gcc's -mcmodel=kernel), it wasn't possible to randomize the physical location beyond the 2GB limit, leaving any additional physical memory unused as a randomization target. In order to decouple the physical and virtual location of the kernel (to make physical address exposures less valuable to attackers), the physical location of the kernel needed to be randomized separately from the virtual location. This required a lot of work for handling very large addresses spanning terabytes of address space. Yinghai Lu, Baoquan He, and I landed a series of patches that ultimately did this (and in the process fixed some other bugs too). This expands the physical offset entropy to roughly $physical_memory_size_of_system / 2MB bits.

x86_64 KASLR memory base offset Thomas Garnier rolled out KASLR to the kernel's various statically located memory ranges, randomizing their locations with CONFIG_RANDOMIZE_MEMORY. One of the more notable things randomized is the physical memory mapping, which is a known target for attacks. Also randomized is the vmalloc area, which makes attacks against targets vmalloced during boot (which tend to always end up in the same location on a given system) harder to locate. (The vmemmap region randomization accidentally missed the v4.8 window and will appear in v4.9.)

x86_64 KASLR with hibernation Rafael Wysocki (with Thomas Garnier, Borislav Petkov, Yinghai Lu, Logan Gunthorpe, and myself) worked on a number of fixes to hibernation code that, even without KASLR, were coincidentally exposed by the earlier W^X fix. With that original problem fixed, memory KASLR exposed more problems. I'm very grateful everyone was able to help out fixing these, especially Rafael and Thomas. It's a hard place to debug. The bottom line, now, is that hibernation and KASLR are no longer mutually exclusive.

gcc plugin infrastructure Emese Revfy ported the PaX/Grsecurity gcc plugin infrastructure to upstream. If you want to perform compiler-based magic on kernel builds, it's now much easier with CONFIG_GCC_PLUGINS! The plugins live in scripts/gcc-plugins/. Current plugins are a short example called "Cyclic Complexity", which just emits the complexity of functions as they're compiled, and "Sanitizer Coverage", which provides the same functionality as gcc's recent -fsanitize-coverage=trace-pc but back through gcc 4.5. Another notable detail about this work is that it was the first Linux kernel security work funded by the Linux Foundation's Core Infrastructure Initiative. I'm looking forward to more plugins! If you're on Debian or Ubuntu, the required gcc plugin headers are available via the gcc-$N-plugin-dev package (and similarly for all cross-compiler packages).

hardened usercopy Along with work from Rik van Riel, Laura Abbott, Casey Schaufler, and many other folks doing testing on the KSPP mailing list, I ported part of PAX_USERCOPY (the basic runtime bounds checking) to upstream as CONFIG_HARDENED_USERCOPY. One of the interface boundaries between the kernel and user-space is the copy_to_user()/copy_from_user() family of functions. Frequently, the size of a copy is known at compile-time (a "built-in constant"), so there's not much benefit in checking those sizes (hardened usercopy avoids these cases). In the case of dynamic sizes, hardened usercopy checks three areas of memory: slab allocations, stack allocations, and kernel text. Direct kernel text copying is simply disallowed. Stack copying is allowed as long as it is entirely contained by the current stack memory range (and on x86, only if it does not include the saved stack frame and instruction pointers). For slab allocations (e.g. those allocated through kmem_cache_alloc() and the kmalloc()-family of functions), the copy size is compared against the size of the object being copied. For example, if copy_from_user() is writing to a structure that was allocated as size 64, but the copy gets tricked into trying to write 65 bytes, hardened usercopy will catch it and kill the process. For testing hardened usercopy, lkdtm gained several new tests: USERCOPY_HEAP_SIZE_TO, USERCOPY_HEAP_SIZE_FROM, USERCOPY_STACK_FRAME_TO, USERCOPY_STACK_FRAME_FROM, USERCOPY_STACK_BEYOND, and USERCOPY_KERNEL. Additionally, USERCOPY_HEAP_FLAG_TO and USERCOPY_HEAP_FLAG_FROM were added to test what will be coming next for hardened usercopy: flagging slab memory as "safe for copy to/from user-space", effectively whitelisting certain slab caches, as done by PAX_USERCOPY. This further reduces the scope of what's allowed to be copied to/from, since most kernel memory is not intended to ever be exposed to user-space. Adding this logic will require some reorganization of usercopy code to add some new APIs, as PAX_USERCOPY's approach to handling special cases is to add bounce-copies (copy from slab to stack, then copy to userspace) as needed, which is unlikely to be acceptable upstream.

seccomp reordered after ptrace By its original design, seccomp filtering happened before ptrace so that seccomp-based ptracers (i.e. SECCOMP_RET_TRACE) could explicitly bypass seccomp filtering and force a desired syscall. Nothing actually used this feature, and as it turns out, it's not compatible with process launchers that install seccomp filters (e.g. systemd, lxc) since as long as the ptrace and fork syscalls are allowed (and fork is needed for any sensible container environment), a process could spawn a tracer to help bypass a filter by injecting syscalls. After Andy Lutomirski convinced me that ordering ptrace first does not change the attack surface of a running process (unless all syscalls are blacklisted, the entire ptrace attack surface will always be exposed), I rearranged things. Now there is no (expected) way to bypass seccomp filters, and containers with seccomp filters can allow ptrace again. That's it for v4.8! The merge window is open for v4.9.

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

1 August 2016

Chris Lamb: Free software activities in July 2016

Here is my monthly update covering a large part of what I have been doing in the free software world (previously):



Debian
  • Created a proof-of-concept wrapper for pymysql to reduce the diff between Ubuntu and Debian's packaging of python-django. (tree)
  • Improved the NEW queue HTML report to display absolute timestamps when placing the cursor over relative times as well as to tidy the underlying HTML generation.
  • Tidied and pushed for the adoption of a patch against dak to also send mails to the signer of an uploaded package on security-master. (#796784)

LTS

This month I have been paid to work 14 hours on Debian Long Term Support (LTS). In that time I did the following:
  • "Frontdesk" duties, triaging CVEs, etc.
  • Improved the bin/lts-cve-triage.py script to ignore packages that have been marked as unsupported.
  • Improved the bin/contact-maintainers script to print a nicer error message if you mistype the package name.
  • Issued the following advisories:
    • DLA 541-1 for libvirt making the password policy consistent across the QEMU and VNC backends with respect to empty passwords.
    • DLA 574-1 for graphicsmagick fixing two denial-of-service vulnerabilities.
    • DLA 548-1 and DLA 550-1 for drupal7 fixing an open HTTP redirect vulnerability and a privilege escalation issue respectively.
    • DLA 557-1 for dietlibc removing the current directory from the current path.
    • DLA 577-1 for redis preventing the redis-cli tool creating world-readable history files.

Uploads
  • redis:
    • 3.2.1-2 Avoiding race conditions in upstream test suite.
    • 3.2.1-3 Correcting world-readable ~/.rediscli_history files.
    • 3.2.1-4 Preventing a race condition in the previous upload's patch.
    • 3.2.2-1 New upstream release.
    • 3.2.1-4~bpo8+1 Backport to jessie-backports.
  • strip-nondeterminism:
    • 0.020-1 Improved the PNG handler to not blindly trust chunk sizes, rewriting most of the existing code.
    • 0.021-1 Correcting a regression in the PNG handler where it would leave temporary files in the generated binaries.
    • 0.022-1 Correcting a further regression in the PNG handler with respect to IEND chunk detection.
  • python-redis (2.10.5-1~bpo8+1) Backport to jessie-backports.
  • reprotest (0.2) Sponsored upload.

Patches contributed


I submitted patches to fix faulty initscripts in lm-sensors, rsync, sane-backends & vsftpd.

In addition, I submitted 7 patches to fix typos in debian/rules against cme, gnugk ("incorrect reference to dh_install_init"), php-sql-formatter, python-django-crispy-forms, libhook-lexwrap-perl, mknbi & ruby-unf-ext.

I also submitted 6 patches to fix reproducible toolchain issues (ie. ensuring the output is reproducible rather than the package itself) against libextutils-parsexs-perl ("Please make the output reproducible"), perl, naturaldocs, python-docutils, ruby-ronn & txt2tags.

Lastly, I submitted 65 patches to fix specific reproducibility issues in amanda, boolector, borgbackup, cc1111, cfingerd, check-all-the-things, cobbler, ctop, cvs2svn, eb, eurephia, ezstream, feh, fonts-noto, fspy, ftplib, fvwm, gearmand, gngb, golang-github-miekg-pkcs11, gpick, gretl, hibernate, hmmer, hocr, idjc, ifmail, ironic, irsim, lacheck, libmemcached-libmemcached-perl, libmongoc, libwebsockets, minidlna, mknbi, nbc, neat, nfstrace, nmh, ntopng, pagekite, pavuk, proftpd-dfsg, pxlib, pysal, python-kinterbasdb, python-mkdocs, sa-exim, speech-tools, stressapptest, tcpflow, tcpreen, ui-auto, uisp, uswsusp, vtun, vtwm, why3, wit, wordgrinder, xloadimage, xmlcopyeditor, xorp, xserver-xorg-video-openchrome & yersinia.

RC bugs

I also filed 68 RC bugs for packages that access the internet during build against betamax, curl, django-localflavor, django-polymorphic, dnspython, docker-registry, elasticsearch-curator, elib.intl, elib.intl, elib.intl, fabulous, flask-restful, flask-restful, flask-restful, foolscap, gnucash-docs, golang-github-azure-go-autorest, golang-github-fluent-fluent-logger-golang, golang-github-franela-goreq, golang-github-mesos-mesos-go, golang-github-shopify-sarama, golang-github-unknwon-com, golang-github-xeipuuv-gojsonschema, htsjdk, lemonldap-ng, libanyevent-http-perl, libcommons-codec-java, libfurl-perl, libgravatar-url-perl, libgravatar-url-perl, libgravatar-url-perl, libgravatar-url-perl, libgravatar-url-perl, libhttp-async-perl, libhttp-oai-perl, libhttp-proxy-perl, libpoe-component-client-http-perl, libuv, libuv1, licenseutils, licenseutils, licenseutils, musicbrainzngs, node-oauth, node-redis, nodejs, pycurl, pytest, python-aiohttp, python-asyncssh, python-future, python-guacamole, python-latexcodec, python-pysnmp4, python-qtawesome, python-simpy, python-social-auth, python-structlog, python-sunlight, python-webob, python-werkzeug, python-ws4py, testpath, traitlets, urlgrabber, varnish-modules, webtest & zurl.


Finally, I filed 100 FTBFS bugs against abind, backup-manager, boot, bzr-git, cfengine3, chron, cloud-sptheme, cookiecutter, date, django-uwsgi, djangorestframework, docker-swarm, ekg2, evil-el, fasianoptions, fassets, fastinfoset, fest-assert, fimport, ftrading, gdnsd, ghc-testsuite, golang-github-magiconair-properties, golang-github-mattn-go-shellwords, golang-github-mitchellh-go-homedir, gplots, gregmisc, highlight.js, influxdb, jersey1, jflex, jhdf, kimwitu, libapache-htpasswd-perl, libconfig-model-itself-perl, libhtml-tidy-perl, liblinux-prctl-perl, libmoox-options-perl, libmousex-getopt-perl, libparanamer-java, librevenge, libvirt-python, license-reconcile, louie, mako, mate-indicator-applet, maven-compiler-plugin, mgt, mgt, mgt, misc3d, mnormt, nbd, ngetty, node-xmpp, nomad, perforate, pyoperators, pyqi, python-activipy, python-bioblend, python-cement, python-gevent, python-pydot-ng, python-requests-toolbelt, python-ruffus, python-scrapy, r-cran-digest, r-cran-getopt, r-cran-lpsolve, r-cran-rms, r-cran-timedate, resteasy, ruby-berkshelf-api-client, ruby-fog-libvirt, ruby-grape-msgpack, ruby-jquery-rails, ruby-kramdown-rfc2629, ruby-moneta, ruby-parser, ruby-puppet-forge, ruby-rbvmomi, ruby-redis-actionpack, ruby-unindent, ruby-web-console, scalapack-doc, scannotation, snow, sorl-thumbnail, svgwrite, systemd-docker, tiles-request, torcs, utf8proc, vagrant-libvirt, voms-api-java, wcwidth, xdffileio, xmlgraphics-commons & yorick.

FTP Team

As a Debian FTP assistant I ACCEPTed 114 packages: apertium-isl-eng, apertium-mk-bg, apertium-urd-hin, apprecommender, auto-apt-proxy, beast-mcmc, caffe, caffe-contrib, debian-edu, dh-make-perl, django-notification, dpkg-cross, elisp-slime-nav, evil-el, fig2dev, file, flightgear-phi, friendly-recovery, fwupd, gcc-5-cross, gdbm, gnustep-gui, golang-github-cznic-lldb, golang-github-dghubble-sling, golang-github-docker-leadership, golang-github-rogpeppe-fastuuid, golang-github-skarademir-naturalsort, golang-glide, gtk+2.0, gtranscribe, kdepim4, kitchen, lepton, libcgi-github-webhook-perl, libcypher-parser, libimporter-perl, liblist-someutils-perl, liblouis, liblouisutdml, libneo4j-client, libosinfo, libsys-cpuaffinity-perl, libtest2-suite-perl, linux, linux-grsec, lua-basexx, lua-compat53, lua-fifo, lua-http, lua-lpeg-patterns, lua-mmdb, lua-openssl, mash, mysql-5.7, node-quickselect, nsntrace, nvidia-graphics-drivers, nvidia-graphics-drivers-legacy-304xx, nvidia-graphics-drivers-legacy-340xx, openorienteering-mapper, oslo-sphinx, p4est, patator, petsc, php-mailparse, php-yaml, pykdtree, pypass, python-bioblend, python-cotyledon, python-jack-client, python-mido, python-openid-cla, python-os-api-ref, python-pydotplus, python-qtconsole, python-repoze.sphinx.autointerface, python-vispy, python-zenoss, r-cran-bbmle, r-cran-corpcor, r-cran-ellipse, r-cran-minpack.lm, r-cran-rglwidget, r-cran-rngtools, r-cran-scatterd3, r-cran-shinybs, r-cran-tibble, reproject, retext, ring, ruby-github-api, ruby-rails-assets-jquery-ui, ruby-swd, ruby-url-safe-base64, ruby-vmstat, ruby-webfinger, rustc, shadowsocks-libev, slepc, staticsite, steam, straight.plugin, svgwrite, tasksh, u-msgpack-python, ufo2otf, user-mode-linux, utf8proc, vizigrep, volk, wchartype, websockify & wireguard.

2 May 2016

Vincent Bernat: Pragmatic Debian packaging

While the creation of Debian packages is abundantly documented, most tutorials are targeted to packages implementing the Debian policy. Moreover, Debian packaging has a reputation of being unnecessarily difficult1 and many people prefer to use less constrained tools2 like fpm or CheckInstall. However, I would like to show how building Debian packages with the official tools can become straightforward if you bend some rules:
  1. No source package will be generated. Packages will be built directly from a checkout of a VCS repository.
  2. Additional dependencies can be downloaded during build. Packaging each dependency individually is painstaking work, notably when you have to deal with fast-paced ecosystems like Java, Javascript and Go.
  3. The produced packages may bundle dependencies. This is likely to raise some concerns about security and long-term maintenance, but this is a common trade-off in many ecosystems, notably Java, Javascript and Go.

Pragmatic packages 101 In the Debian archive, you have two kinds of packages: the source packages and the binary packages. Each binary package is built from a source package. You need a name for each package. As stated in the introduction, we won't generate a source package but we will work with its unpacked form which is any source tree containing a debian/ directory. In our examples, we will start with a source tree containing only a debian/ directory but you are free to include this debian/ directory into an existing project. As an example, we will package memcached, a distributed memory cache. There are four files to create:
  • debian/compat,
  • debian/changelog,
  • debian/control, and
  • debian/rules.
The first one is easy. Just put 9 in it:
echo 9 > debian/compat
The second one has the following content:
memcached (0-0) UNRELEASED; urgency=medium
  * Fake entry
 -- Happy Packager <happy@example.com>  Tue, 19 Apr 2016 22:27:05 +0200
The only important information is the name of the source package, memcached, on the first line. Everything else can be left as is, as it won't influence the generated binary packages.

The control file debian/control describes the metadata of both the source package and the generated binary packages. We have to write a block for each of them.
Source: memcached
Maintainer: Vincent Bernat <bernat@debian.org>
Package: memcached
Architecture: any
Description: high-performance memory object caching system
The source package is called memcached. We have to use the same name as in debian/changelog. We generate only one binary package: memcached. In the remainder of the example, when you see memcached, this is the name of a binary package. The Architecture field should be set to either any or all. Use all exclusively if the package contains only arch-independent files. If in doubt, just stick to any. The Description field contains a short description of the binary package.

The build recipe The last mandatory file is debian/rules. It's the recipe of the package. We need to retrieve memcached, build it and install its file tree in debian/memcached/. It looks like this:
#!/usr/bin/make -f
DISTRIBUTION = $(shell lsb_release -sr)
VERSION = 1.4.25
PACKAGEVERSION = $(VERSION)-0~$(DISTRIBUTION)0
TARBALL = memcached-$(VERSION).tar.gz
URL = http://www.memcached.org/files/$(TARBALL)
%:
    dh $@
override_dh_auto_clean:
override_dh_auto_test:
override_dh_auto_build:
override_dh_auto_install:
    wget -N --progress=dot:mega $(URL)
    tar --strip-components=1 -xf $(TARBALL)
    ./configure --prefix=/usr
    make
    make install DESTDIR=debian/memcached
override_dh_gencontrol:
    dh_gencontrol -- -v$(PACKAGEVERSION)
The empty targets override_dh_auto_clean, override_dh_auto_test and override_dh_auto_build keep debhelper from being too smart. The override_dh_gencontrol target sets the package version3 without updating debian/changelog. If you ignore the slight boilerplate, the recipe is quite similar to what you would have done with fpm:
DISTRIBUTION=$(lsb_release -sr)
VERSION=1.4.25
PACKAGEVERSION=${VERSION}-0~${DISTRIBUTION}0
TARBALL=memcached-${VERSION}.tar.gz
URL=http://www.memcached.org/files/${TARBALL}
wget -N --progress=dot:mega ${URL}
tar --strip-components=1 -xf ${TARBALL}
./configure --prefix=/usr
make
make install DESTDIR=/tmp/installdir
# Build the final package
fpm -s dir -t deb \
    -n memcached \
    -v ${PACKAGEVERSION} \
    -C /tmp/installdir \
    --description "high-performance memory object caching system"
You can review the whole package tree on GitHub and build it with dpkg-buildpackage -us -uc -b.

Pragmatic packages 102 At this point, we can iterate and add several improvements to our memcached package. None of those are mandatory but they are usually worth the additional effort.

Build dependencies Our initial build recipe only works when several packages are installed, like wget and libevent-dev. They are not present on all Debian systems. You can easily express that you need them by adding a Build-Depends section for the source package in debian/control:
Source: memcached
Build-Depends: debhelper (>= 9),
               wget, ca-certificates, lsb-release,
               libevent-dev
Always specify the debhelper (>= 9) dependency as we heavily rely on it. We don't require make or a C compiler because it is assumed that the build-essential meta-package is installed, and it pulls them in. dpkg-buildpackage will complain if the dependencies are not met. If you want to install those packages from your CI system, you can use the following command4:
mk-build-deps \
    -t 'apt-get -o Debug::pkgProblemResolver=yes --no-install-recommends -qqy' \
    -i -r debian/control
You may also want to investigate pbuilder or sbuild, two tools to build Debian packages in a clean isolated environment.

Runtime dependencies If the resulting package is installed on a freshly installed machine, it won't work because it will be missing libevent, a required library for memcached. You can express the dependencies needed by each binary package by adding a Depends field. Moreover, for dynamic libraries, you can automatically get the right dependencies by using some substitution variables:
Package: memcached
Depends: ${misc:Depends}, ${shlibs:Depends}
The resulting package will contain the following information:
$ dpkg -I ../memcached_1.4.25-0\~unstable0_amd64.deb | grep Depends
 Depends: libc6 (>= 2.17), libevent-2.0-5 (>= 2.0.10-stable)

Integration with init system Most packaged daemons come with some integration with the init system. This integration ensures the daemon will be started on boot and restarted on upgrade. For Debian-based distributions, there are several init systems available. The most prominent ones are:
  • System-V init is the historical init system. More modern inits are able to reuse scripts written for this init, so this is a safe common denominator for packaged daemons.
  • Upstart is the less-historical init system for Ubuntu (used in Ubuntu 14.10 and previous releases).
  • systemd is the default init system for Debian since Jessie and for Ubuntu since 15.04.
Writing a correct script for the System-V init is error-prone. Therefore, I usually prefer to provide a native configuration file for the default init system of the targeted distribution (Upstart and systemd).

System-V If you want to provide a System-V init script, have a look at /etc/init.d/skeleton on the most ancient distribution you want to target and adapt it5. Put the result in debian/memcached.init. It will be installed at the right place, invoked on install, upgrade and removal. On Debian-based systems, many init scripts allow user customizations by providing a /etc/default/memcached file. You can ship one by putting its content in debian/memcached.default.

Upstart Providing an Upstart job is similar: put it in debian/memcached.upstart. For example:
description "memcached daemon"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
respawn limit 5 60
expect daemon
script
  . /etc/default/memcached
  exec memcached -d -u $USER -p $PORT -m $CACHESIZE -c $MAXCONN $OPTIONS
end script
When writing an Upstart job, the most important directive is expect. Be sure to get it right. Here, we use expect daemon and memcached is started with the -d flag.

systemd Providing a systemd unit is a bit more complex. The content of the file should go in debian/memcached.service. For example:
[Unit]
Description=memcached daemon
After=network.target
[Service]
Type=forking
EnvironmentFile=/etc/default/memcached
ExecStart=/usr/bin/memcached -d -u $USER -p $PORT -m $CACHESIZE -c $MAXCONN $OPTIONS
Restart=on-failure
[Install]
WantedBy=multi-user.target
We reuse /etc/default/memcached even if it is not considered a good practice with systemd6. Like for Upstart, the directive Type is quite important. We used forking as memcached is started with the -d flag. You also need to add a build-dependency to dh-systemd in debian/control:
Source: memcached
Build-Depends: debhelper (>= 9),
               wget, ca-certificates, lsb-release,
               libevent-dev,
               dh-systemd
And you need to modify the default rule in debian/rules:
%:
    dh $@ --with systemd
The extra complexity is a bit unfortunate but systemd integration is not part of debhelper7. Without those additional modifications, the unit will get installed but you won't get a proper integration and the service won't be enabled on install or boot.

Dedicated user Many daemons don't need to run as root and it is a good practice to ship a dedicated user. In the case of memcached, we can provide a _memcached user8. Add a debian/memcached.postinst file with the following content:
#!/bin/sh
set -e
case "$1" in
    configure)
        adduser --system --disabled-password --disabled-login --home /var/empty \
                --no-create-home --quiet --force-badname --group _memcached
        ;;
esac
#DEBHELPER#
exit 0
There is no cleanup of the user when the package is removed for two reasons:
  1. Less stuff to write.
  2. The user could still own some files.
The utility adduser will do the right thing whether the requested user already exists or not. You need to add it as a dependency in debian/control:
Package: memcached
Depends: ${misc:Depends}, ${shlibs:Depends}, adduser
The #DEBHELPER# marker is important as it will be replaced by some code to handle the service configuration files (or some other stuff). You can review the whole package tree on GitHub and build it with dpkg-buildpackage -us -uc -b.

Pragmatic packages 103 It is possible to leverage debhelper to reduce the recipe size and to make it more declarative. This section is quite optional and it requires understanding a bit more how a Debian package is built. Feel free to skip it.

The big picture There are four steps to build a regular Debian package:
  1. debian/rules clean should clean the source tree to make it pristine.
  2. debian/rules build should trigger the build. For an autoconf-based software, like memcached, this step should execute something like ./configure && make.
  3. debian/rules install should install the file tree of each binary package. For an autoconf-based software, this step should execute make install DESTDIR=debian/memcached.
  4. debian/rules binary will pack the different file trees into binary packages.
You don't directly write each of those targets. Instead, you let dh, a component of debhelper, do most of the work. The following debian/rules file should do almost everything correctly with many source packages:
#!/usr/bin/make -f
%:
    dh $@
For each of the four targets described above, you can run dh with --no-act to see what it would do. For example:
$ dh build --no-act
   dh_testdir
   dh_update_autotools_config
   dh_auto_configure
   dh_auto_build
   dh_auto_test
Each of those helpers has a manual page. Helpers starting with dh_auto_ are a bit "magic". For example, dh_auto_configure will try to automatically configure a package prior to building: it will detect the build system and invoke ./configure, cmake or Makefile.PL. If one of the helpers does not do the right thing, you can replace it by using an override target:
override_dh_auto_configure:
    ./configure --with-some-grog
Those helpers are also configurable, so you can just alter their behaviour a bit by invoking them with additional options:
override_dh_auto_configure:
    dh_auto_configure -- --with-some-grog
This way, ./configure will be called with your custom flag but also with a lot of default flags like --prefix=/usr for better integration. In the initial memcached example, we overrode all those magic targets. dh_auto_clean, dh_auto_configure and dh_auto_build are converted to no-ops to avoid any unexpected behaviour. dh_auto_install is hijacked to do the whole build process. Additionally, we modified the behavior of the dh_gencontrol helper by forcing the version number instead of using the one from debian/changelog.

Automatic builds As memcached is an autoconf-enabled package, dh knows how to build it: ./configure && make && make install. Therefore, we can let it handle most of the work with this debian/rules file:
#!/usr/bin/make -f
DISTRIBUTION = $(shell lsb_release -sr)
VERSION = 1.4.25
PACKAGEVERSION = $(VERSION)-0~$(DISTRIBUTION)0
TARBALL = memcached-$(VERSION).tar.gz
URL = http://www.memcached.org/files/$(TARBALL)
%:
    dh $@ --with systemd
override_dh_auto_clean:
    wget -N --progress=dot:mega $(URL)
    tar --strip-components=1 -xf $(TARBALL)
override_dh_auto_test:
    # Don't run the whitespace test
    rm t/whitespace.t
    dh_auto_test
override_dh_gencontrol:
    dh_gencontrol -- -v$(PACKAGEVERSION)
The dh_auto_clean target is hijacked to download and set up the source tree9. We don't override the dh_auto_configure step, so dh will execute the ./configure script with the appropriate options. We don't override the dh_auto_build step either: dh will execute make. dh_auto_test is invoked after the build and it will run the memcached test suite. We need to override it because one of the tests complains about odd whitespace in the debian/ directory. We suppress this rogue test and let dh_auto_test execute the test suite. dh_auto_install is not overridden either, so dh will execute some variant of make install. To get a better sense of the difference, here is a diff:
--- memcached-intermediate/debian/rules 2016-04-30 14:02:37.425593362 +0200
+++ memcached/debian/rules  2016-05-01 14:55:15.815063835 +0200
@@ -12,10 +12,9 @@
 override_dh_auto_clean:
-override_dh_auto_test:
-override_dh_auto_build:
-override_dh_auto_install:
    wget -N --progress=dot:mega $(URL)
    tar --strip-components=1 -xf $(TARBALL)
-   ./configure --prefix=/usr
-   make
-   make install DESTDIR=debian/memcached
+
+override_dh_auto_test:
+   # Don't run the whitespace test
+   rm t/whitespace.t
+   dh_auto_test
It is up to you to decide if dh can do some work for you, but you could try to start from a minimal debian/rules and only override some targets.

Install additional files While make install installed the essential files for memcached, you may want to put additional files in the binary package. You could use cp in your build recipe, but you can also declare them:
  • files listed in debian/memcached.docs will be copied to /usr/share/doc/memcached by dh_installdocs,
  • files listed in debian/memcached.examples will be copied to /usr/share/doc/memcached/examples by dh_installexamples,
  • files listed in debian/memcached.manpages will be copied to the appropriate subdirectory of /usr/share/man by dh_installman,
Here is an example using wildcards for debian/memcached.docs:
doc/*.txt
If you need to copy some files to an arbitrary location, you can list them along with their destination directories in debian/memcached.install and dh_install will take care of the copy. Here is an example:
scripts/memcached-tool usr/bin
Using those files makes the build process more declarative. It is a matter of taste and you are free to use cp in debian/rules instead. You can review the whole package tree on GitHub.

Other examples The GitHub repository contains some additional examples. They all follow the same scheme:
  • dh_auto_clean is hijacked to download and setup the source tree
  • dh_gencontrol is modified to use a computed version
Notably, you'll find daemons in Java, Go, Python and Node.js. The goal of those examples is to demonstrate that using Debian tools to build Debian packages can be straightforward. Hope this helps.

  1. People may remember the time before debhelper 7.0.50 (circa 2009) when debian/rules was a daunting beast. However, nowadays, the boilerplate is quite reduced.
  2. The complexity is not the only reason. Those alternative tools enable the creation of RPM packages, something that Debian tools obviously don't.
  3. There are many ways to version a package. Again, if you want to be pragmatic, the proposed solution should be good enough for Ubuntu. On Debian, it doesn't cover upgrades from one distribution version to another, but we assume that nowadays, systems get reinstalled instead of being upgraded.
  4. You also need to install the devscripts and equivs packages.
  5. It's also possible to use a script provided by upstream. However, there is no such thing as an init script that works on all distributions. Compare the proposed script with the skeleton, check if it is using start-stop-daemon and if it sources /lib/lsb/init-functions before considering it. If it seems to fit, you can install it yourself in debian/memcached/etc/init.d/. debhelper will ensure its proper integration.
  6. Instead, a user wanting to customize the options is expected to edit the unit with systemctl edit.
  7. See #822670
  8. The Debian Policy doesn't provide any hint for the naming convention of those system users. A common usage is to prefix the daemon name with an underscore (like _memcached). Another common usage is to use Debian- as a prefix. The main drawback of the latter solution is that the name is likely to be replaced by the UID in ps and top because of its length.
  9. We could call dh_auto_clean at the end of the target to let it invoke make clean. However, it is assumed that a fresh checkout is used before each build.

10 April 2016

Vincent Bernat: Testing network software with pytest and Linux namespaces

Started in 2008, lldpd is an implementation of IEEE 802.1AB-2005 (aka LLDP) written in C. While it contains some unit tests, like many other network-related software of the time, their coverage is pretty poor: they are hard to write because the code is written in an imperative style and tightly coupled with the system. It would require extensive mocking1. While a rewrite (complete or iterative) would help to make the code more test-friendly, it would be quite an effort and would likely introduce operational bugs along the way. To get better test coverage, the major features of lldpd are now verified through integration tests. Those tests leverage Linux network namespaces to set up a lightweight and isolated environment for each test. They run through pytest, a powerful testing tool.

pytest in a nutshell pytest is a Python testing tool whose primary use is to write tests for Python applications but is versatile enough for other creative usages. It is bundled with three killer features:
  • you can directly use the assert keyword,
  • you can inject fixtures in any test function, and
  • you can parametrize tests.

Assertions With unittest, the unit testing framework included with Python, and many similar frameworks, unit tests have to be encapsulated into a class and use the provided assertion methods. For example:
class testArithmetics(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 3, 4)
The equivalent with pytest is simpler and more readable:
def test_addition():
    assert 1 + 3 == 4
pytest will analyze the AST and display useful error messages in case of failure. For further information, see Benjamin Peterson's article.

Fixtures A fixture is the set of actions performed in order to prepare the system to run some tests. With classic frameworks, you can only define one fixture for a set of tests:
class testInVM(unittest.TestCase):
    def setUp(self):
        self.vm = VM('Test-VM')
        self.vm.start()
        self.ssh = SSHClient()
        self.ssh.connect(self.vm.public_ip)
    def tearDown(self):
        self.ssh.close()
        self.vm.destroy()
    def test_hello(self):
        stdin, stdout, stderr = self.ssh.exec_command("echo hello")
        stdin.close()
        self.assertEqual(stderr.read(), b"")
        self.assertEqual(stdout.read(), b"hello\n")
In the example above, we want to test various commands on a remote VM. The fixture launches a new VM and configures an SSH connection. However, if the SSH connection cannot be established, the fixture will fail and the tearDown() method won't be invoked. The VM will be left running. Instead, with pytest, we could do this:
@pytest.yield_fixture
def vm():
    r = VM('Test-VM')
    r.start()
    yield r
    r.destroy()
@pytest.yield_fixture
def ssh(vm):
    ssh = SSHClient()
    ssh.connect(vm.public_ip)
    yield ssh
    ssh.close()
def test_hello(ssh):
    stdin, stdout, stderr = ssh.exec_command("echo hello")
    stdin.close()
    assert stderr.read() == b""
    assert stdout.read() == b"hello\n"
The first fixture will provide a freshly booted VM. The second one will set up an SSH connection to the VM provided as an argument. Fixtures are used through dependency injection: just give their names in the signature of the test functions and fixtures that need them. Each fixture only handles the lifetime of one entity. Whether a dependent test function or fixture succeeds or fails, the VM will always be destroyed in the end.

Parameters If you want to run the same test several times with a varying parameter, you can dynamically create test functions or use one test function with a loop. With pytest, you can parametrize test functions and fixtures:
@pytest.mark.parametrize("n1, n2, expected", [
    (1, 3, 4),
    (8, 20, 28),
    (-4, 0, -4)])
def test_addition(n1, n2, expected):
    assert n1 + n2 == expected

Testing lldpd The general plan to test a feature in lldpd is the following:
  1. Setup two namespaces.
  2. Create a virtual link between them.
  3. Spawn a lldpd process in each namespace.
  4. Test the feature in one namespace.
  5. Check with lldpcli that we get the expected result in the other.
Here is a typical test using the most interesting features of pytest:
@pytest.mark.skipif('LLDP-MED' not in pytest.config.lldpd.features,
                    reason="LLDP-MED not supported")
@pytest.mark.parametrize("classe, expected", [
    (1, "Generic Endpoint (Class I)"),
    (2, "Media Endpoint (Class II)"),
    (3, "Communication Device Endpoint (Class III)"),
    (4, "Network Connectivity Device")])
def test_med_devicetype(lldpd, lldpcli, namespaces, links,
                        classe, expected):
    links(namespaces(1), namespaces(2))
    with namespaces(1):
        lldpd("-r")
    with namespaces(2):
        lldpd("-M", str(classe))
    with namespaces(1):
        out = lldpcli("-f", "keyvalue", "show", "neighbors", "details")
        assert out['lldp.eth0.lldp-med.device-type'] == expected
First, the test will be executed only if lldpd was compiled with LLDP-MED support. Second, the test is parametrized. We will execute four distinct tests, one for each role that lldpd should be able to take as an LLDP-MED-enabled endpoint. The signature of the test has four parameters that are not covered by the parametrize() decorator: lldpd, lldpcli, namespaces and links. They are fixtures. A lot of magic happens in those to keep the actual tests short:
  • lldpd is a factory to spawn an instance of lldpd. When called, it will set up the current namespace (setting up the chroot, creating the user and group for privilege separation, replacing some files to be distribution-agnostic, …), then call lldpd with the additional parameters provided. The output is recorded and added to the test report in case of failure. The module also contains the creation of the pytest.config.lldpd object that is used to record the features supported by lldpd and skip non-matching tests. You can read fixtures/programs.py for more details.
  • lldpcli is also a factory, but it spawns instances of lldpcli, the client to query lldpd. Moreover, it will parse the output in a dictionary to reduce boilerplate.
  • namespaces is one of the most interesting pieces. It is a factory for Linux namespaces. It will spawn a new namespace or refer to an existing one. It is possible to switch from one namespace to another (with with) as they are context managers. Behind the scenes, the factory maintains the appropriate file descriptors for each namespace and switches to them with setns(). Once the test is done, everything is wiped out as the file descriptors are garbage collected. You can read fixtures/namespaces.py for more details. It is quite reusable in other projects2. A minimal sketch of the underlying setns() trick is shown after this list.
  • links contains helpers to handle network interfaces: creation of virtual ethernet link between namespaces, creation of bridges, bonds and VLAN, etc. It relies on the pyroute2 module. You can read fixtures/network.py for more details.
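To make the mechanism concrete, here is a minimal sketch of such a context manager, my own illustration rather than an excerpt from fixtures/namespaces.py; it assumes Linux, root privileges, and a namespace path like /proc/<pid>/ns/net:

import ctypes
import ctypes.util
import os

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
CLONE_NEWNET = 0x40000000  # value of CLONE_NEWNET from <sched.h>

class NetworkNamespace:
    """Run the body of a 'with' block inside another network namespace."""
    def __init__(self, nspath):
        self.nsfd = os.open(nspath, os.O_RDONLY)

    def __enter__(self):
        # Keep a file descriptor on the current namespace to come back to it.
        self.oldfd = os.open("/proc/self/ns/net", os.O_RDONLY)
        if libc.setns(self.nsfd, CLONE_NEWNET) == -1:
            raise OSError(ctypes.get_errno(), "setns() failed")

    def __exit__(self, *exc):
        libc.setns(self.oldfd, CLONE_NEWNET)  # switch back
        os.close(self.oldfd)

The real fixture does more (it creates namespaces on the fly and keeps their file descriptors alive for reuse), but the switching logic boils down to these few lines.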
You can see an example of a test run on the Travis build for 0.9.2. Since each test is correctly isolated, it's possible to run parallel tests with pytest -n 10 --boxed. To catch even more bugs, both the address sanitizer (ASAN) and the undefined behavior sanitizer (UBSAN) are enabled. In case of a problem, notably a memory leak, the faulty program will exit with a non-zero exit code and the associated test will fail.

  1. A project like cwrap would definitely help. However, it lacks support for Netlink and raw sockets that are essential in lldpd operations.
  2. There are three main limitations in the use of namespaces with this fixture. First, when creating a user namespace, only root is mapped to the current user. With lldpd, we have two users (root and _lldpd). Therefore, the tests have to run as root. The second limitation is with the PID namespace. It's not possible for a process to switch from one PID namespace to another. When you call setns() on a PID namespace, only children of the current process will be in the new PID namespace. The PID namespace is convenient to ensure everyone gets killed once the tests are terminated, but you must keep in mind that /proc must be mounted in children only. The third limitation is that, for some namespaces (PID and user), all threads of a process must be part of the same namespace. Therefore, don't use threads in tests. Use the multiprocessing module instead.

3 January 2016

Lunar: Reproducible builds: week 35 in Stretch cycle

What happened in the reproducible builds effort between December 20th and December 26th:

Toolchain fixes Mattia Rizzolo rebased our experimental versions of debhelper (twice!) and dpkg on top of the latest releases. Reiner Herrmann submitted a patch for mozilla-devscripts to sort the file list in generated preferences.js files. To be able to lift the restriction that packages must be built in the same path, translation support for the __FILE__ C pre-processor macro would also be required. Joerg Sonnenberger submitted a patch back in 2010 that would still be useful today. Chris Lamb started work on providing a deterministic mode for debootstrap.

Packages fixed The following packages have become reproducible due to changes in their build dependencies: bouncycastle, cairo-dock-plug-ins, darktable, gshare, libgpod, pafy, ruby-redis-namespace, ruby-rouge, sparkleshare. The following packages became reproducible after getting fixed: Some uploads fixed some reproducibility issues, but not all of them: Patches submitted which have not made their way to the archive yet:

reproducible.debian.net Statistics for package sets are now visible for the armhf architecture. (h01ger) The second build now has a longer timeout (18 hours) than the first build (12 hours). This should prevent wasting resources when a machine is loaded. (h01ger) Builds of Arch Linux packages are now done using a tmpfs. (h01ger) 200 GiB have been added to jenkins.debian.net (thanks to ProfitBricks!) to make room for new jobs. The current count is at 962 and growing!

diffoscope development Aside from some minor bugs that have been fixed, a one-line change made huge memory (and time) savings as the output of the transformation tool is now streamed line by line instead of loaded entirely in memory at once.

disorderfs development Andrew Ayer released disorderfs version 0.4.2-1 on December 22nd. It fixes a memory corruption error when processing command line arguments that could cause command line options to be ignored.

Documentation update Many small improvements for the documentation on reproducible-builds.org sent by Georg Koppen were merged.

Package reviews 666 (!) reviews have been removed, 189 added and 162 updated in the previous week. 151 new "fail to build from source" reports have been made by Chris West, Chris Lamb, Mattia Rizzolo, and Niko Tyni. New issues identified: unsorted_filelist_in_xul_ext_preferences, nondeterminstic_output_generated_by_moarvm.

Misc. Steven Chamberlain drew our attention to one analysis of the Juniper ScreenOS Authentication Backdoor: "Whilst this may have been added in source code, it was well-disguised in the disassembly and just 7 instructions long. I thought this was a good example of the current state-of-the-art, and why we'd like our binaries, and eventually, installer and VM images, reproducible IMHO." Joanna Rutkowska has mentioned possible ways for Qubes to become reproducible on their development mailing-list.

24 September 2015

Petter Reinholdtsen: The life and death of a laptop battery

When I get a new laptop, the battery life time at the start is OK. But this does not last. The last few laptops gave me the feeling that within a year, the life time is just a fraction of what it used to be, and it slowly becomes painful to use the laptop without power connected all the time. Because of this, when I got a new Thinkpad X230 laptop about two years ago, I decided to monitor its battery state to have more hard facts when the battery started to fail. First I tried to find a sensible Debian package to record the battery status, assuming that this must be a problem already handled by someone else. I found battery-stats, which collects statistics from the battery, but it was completely broken. I sent a few suggestions to the maintainer, but decided to write my own collector as a shell script while I waited for feedback from him. Via a blog post about the battery development on a MacBook Air I also discovered batlog, not available in Debian. I started my collector on 2013-07-15, and it has been collecting battery stats ever since. Now my /var/log/hjemmenett-battery-status.log file contains around 115,000 measurements, from the time the battery was working great until now, when it is unable to charge above 7% of original capacity. My collector shell script is quite simple and looks like this:
#!/bin/sh
# Inspired by
# http://www.ifweassume.com/2013/08/the-de-evolution-of-my-laptop-battery.html
# See also
# http://blog.sleeplessbeastie.eu/2013/01/02/debian-how-to-monitor-battery-capacity/
logfile=/var/log/hjemmenett-battery-status.log
files="manufacturer model_name technology serial_number \
    energy_full energy_full_design energy_now cycle_count status"
if [ ! -e "$logfile" ] ; then
    (
	printf "timestamp,"
	for f in $files; do
	    printf "%s," $f
	done
	echo
    ) > "$logfile"
fi
log_battery() {
    # Print complete message in one echo call, to avoid race condition
    # when several log processes run in parallel.
    msg=$(printf "%s," $(date +%s); \
	for f in $files; do \
	    printf "%s," $(cat $f); \
	done)
    echo "$msg"
}
cd /sys/class/power_supply
for bat in BAT*; do
    (cd $bat && log_battery >> "$logfile")
done
The script is called when the power management system detects a change in the power status (power plug in or out), and when going into and out of hibernation and suspend. In addition, it collects a value every 10 minutes. This makes it possible for me to know when the battery is discharging, charging and how the maximum charge changes over time. The code for the Debian package is now available on github. The collected log file looks like this:
timestamp,manufacturer,model_name,technology,serial_number,energy_full,energy_full_design,energy_now,cycle_count,status,
1376591133,LGC,45N1025,Li-ion,974,62800000,62160000,39050000,0,Discharging,
[...]
1443090528,LGC,45N1025,Li-ion,974,4900000,62160000,4900000,0,Full,
1443090601,LGC,45N1025,Li-ion,974,4900000,62160000,4900000,0,Full,
I wrote a small script to create a graph of the charge development over time. The graph depicted above shows the slow death of my laptop battery. But why is this happening? Why are my laptop batteries always dying in a year or two, while the batteries of space probes and satellites keep working year after year? If we are to believe Battery University, the cause is me charging the battery whenever I have a chance, and the fix is to not charge the Lithium-ion batteries to 100% all the time, but to stay below 90% of full charge most of the time. I've been told that the Tesla electric cars limit the charge of their batteries to 80%, with the option to charge to 100% when preparing for a longer trip (not that I would want a car like Tesla where the right to privacy is abandoned, but that is another story), which I guess is the option we should have for laptops on Linux too. Is there a good and generic way with Linux to tell the battery to stop charging at 80%, unless requested to charge to 100% once in preparation for a longer trip? I found one recipe on askubuntu for Ubuntu to limit charging on a Thinkpad to 80%, but could not get it to work (the kernel module refused to load). I wonder why the battery capacity was reported to be more than 100% at the start. I also wonder why the "full capacity" increases sometimes, and if it is possible to repeat the process to get the battery back to design capacity. And I wonder if the discharge and charge speed change over time, or if they stay the same. I did not yet try to write a tool to calculate the derivative values of the battery level, but suspect some interesting insights might be learned from those. Update 2015-09-24: I got a tip to install the acpi-call-dkms and tlp (unfortunately missing in Debian stable) packages instead of the tp-smapi-dkms package I had tried to use initially, and use 'tlp setcharge 40 80' to change when charging starts and stops. I've done so now, but expect my existing battery is toast and needs to be replaced. The proposal is unfortunately Thinkpad specific.
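As a starting point for such a tool, here is a minimal sketch, my own illustration rather than anything shipped with battery-stats, that reads the CSV log format shown above and prints the battery health (energy_full as a percentage of energy_full_design) along with the charge or discharge rate between consecutive samples:

#!/usr/bin/env python3
# Derive battery health and charge/discharge rate from the CSV log
# produced by the collector script above.
import csv

logfile = "/var/log/hjemmenett-battery-status.log"

prev = None
with open(logfile) as f:
    for row in csv.DictReader(f):
        try:
            ts = int(row["timestamp"])
            full = int(row["energy_full"])
            design = int(row["energy_full_design"])
            now = int(row["energy_now"])
        except (KeyError, TypeError, ValueError):
            continue  # skip malformed lines
        health = 100.0 * full / design
        if prev and ts > prev[0]:
            # sysfs energy_* values are in µWh, so this is µWh per second
            rate = (now - prev[1]) / (ts - prev[0])
            print("%d health=%.1f%% rate=%+.0f uWh/s" % (ts, health, rate))
        prev = (ts, now)

Plotting the health column over time gives the kind of graph discussed above.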

1 September 2015

Lunar: Reproducible builds: week 18 in Stretch cycle

What happened in the reproducible builds effort this week:

Toolchain fixes Aurélien Jarno uploaded glibc/2.21-0experimental1 which will fix the issue where locales-all did not behave exactly like locales despite having it in the Provides field. Lunar rebased the pu/reproducible_builds branch for dpkg on top of the released 1.18.2. This made visible an issue with udebs and automatically generated debug packages. The summary of the meeting at DebConf15 between ftpmasters, dpkg maintainers and reproducible builds folks has been posted to the relevant mailing lists.

Packages fixed The following 70 packages became reproducible due to changes in their build dependencies: activemq-activeio, async-http-client, classworlds, clirr, compress-lzf, dbus-c++, felix-bundlerepository, felix-framework, felix-gogo-command, felix-gogo-runtime, felix-gogo-shell, felix-main, felix-shell-tui, felix-shell, findbugs-bcel, gco, gdebi, gecode, geronimo-ejb-3.2-spec, git-repair, gmetric4j, gs-collections, hawtbuf, hawtdispatch, jack-tools, jackson-dataformat-cbor, jackson-dataformat-yaml, jackson-module-jaxb-annotations, jmxetric, json-simple, kryo-serializers, lhapdf, libccrtp, libclaw, libcommoncpp2, libftdi1, libjboss-marshalling-java, libmimic, libphysfs, libxstream-java, limereg, maven-debian-helper, maven-filtering, maven-invoker, mochiweb, mongo-java-driver, mqtt-client, netty-3.9, openhft-chronicle-queue, openhft-compiler, openhft-lang, pavucontrol, plexus-ant-factory, plexus-archiver, plexus-bsh-factory, plexus-cdc, plexus-classworlds2, plexus-component-metadata, plexus-container-default, plexus-io, pytone, scolasync, sisu-ioc, snappy-java, spatial4j-0.4, tika, treeline, wss4j, xtalk, zshdb. The following packages became reproducible after getting fixed: Some uploads fixed some reproducibility issues but not all of them: Patches submitted which have not made their way to the archive yet: Chris Lamb also noticed that binaries shipped with libsilo-bin did not work.

Documentation update Chris Lamb and Ximin Luo assembled a proper specification for SOURCE_DATE_EPOCH in the hope of convincing more upstreams to adopt it. Thanks to Holger it is published under a non-Debian domain name. Lunar documented the easiest way to solve issues with file ordering and timestamps in tarballs that came with tar/1.28-1. Some examples on how to use SOURCE_DATE_EPOCH have been improved to support systems without GNU date.

reproducible.debian.net armhf is finally being tested, which also means the remote building of Debian packages finally works! This paves the way to performing the tests on even more architectures and doing variations on CPU and date. Some packages even produce the same binary Arch:all packages on different architectures (1, 2). (h01ger) Tests for FreeBSD are finally running. (h01ger) As it seems the gcc5 transition has cooled off, we schedule sid more often than testing again on amd64. (h01ger) disorderfs has been built and installed on all build nodes (amd64 and armhf). One issue related to permissions for root and unprivileged users needs to be solved before disorderfs can be used on reproducible.debian.net. (h01ger)

strip-nondeterminism Version 0.011-1 has been released on August 29th. The new version updates dh_strip_nondeterminism to match recent changes in debhelper. (Andrew Ayer)

disorderfs disorderfs, the new FUSE filesystem to ease testing of filesystem-related variations, is now almost ready to be used. Version 0.2.0 adds support for extended attributes. Since then Andrew Ayer also added support to reverse directory entries instead of shuffling them, and arbitrary padding to the number of blocks used by files.

Package reviews 142 reviews have been removed, 48 added and 259 updated this week. Santiago Vila renamed the not_using_dh_builddeb issue into varying_mtimes_in_data_tar_gz_or_control_tar_gz to align better with other tag names. New issue identified this week: random_order_in_python_doit_completion. 37 FTBFS issues have been reported by Chris West (Faux) and Chris Lamb.

Misc. h01ger gave a talk at FrOSCon on August 23rd. Recordings are already online. These reports are being reviewed and enhanced every week by many people hanging out on #debian-reproducible. Huge thanks!
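To illustrate the class of problems disorderfs is meant to expose, here is a minimal sketch of my own (not taken from the disorderfs sources): a build step that archives a directory straight from os.listdir() inherits the filesystem's directory order, while sorting the names first makes the output deterministic:

import os
import tarfile

def build_archive(srcdir, out, deterministic=True):
    names = os.listdir(srcdir)  # order depends on the underlying filesystem
    if deterministic:
        names.sort()            # impose a stable order on the archive members
    with tarfile.open(out, "w") as tar:
        for name in names:
            # top-level entries only, to keep the sketch short
            tar.add(os.path.join(srcdir, name), arcname=name, recursive=False)

Running the non-deterministic variant under disorderfs, which shuffles or reverses directory entries, makes the irreproducibility show up on every build instead of only on some filesystems.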

25 August 2015

Lunar: Reproducible builds: week 17 in Stretch cycle

A good amount of the Debian reproducible builds team had the chance to enjoy face-to-face interactions during DebConf15.
[Picture: names in red and blue were all present at DebConf15]
[Picture of the "reproducible builds" talk during DebConf15]
Hugging people with whom one has been working tirelessly for months gives a lot of warm-fuzzy feelings. Several recorded and hallway discussions paved the way to solving the remaining issues to get reproducible builds into Debian proper. Talks by both the Debian Project Leader and the release team mentioned the effort as important for the future of Debian. A forty-five minute talk presented the state of the reproducible builds effort. It was then followed by an hour-long roundtable to discuss the current blockers regarding dpkg, .buildinfo and their integration in the archive.
Picture of the "reproducible builds" roundtable during DebConf15
Toolchain fixes
Reiner Herrmann submitted a patch to make rdfind sort the processed files before doing any operation. Chris Lamb proposed a new patch for wheel implementing support for SOURCE_DATE_EPOCH instead of the custom WHEEL_FORCE_TIMESTAMP. akira sent one making man2html SOURCE_DATE_EPOCH aware. Stéphane Glondu reported that dpkg-source would not respect tarball permissions when unpacking under a umask of 002. After hours of iterative testing during the DebConf workshop, Sandro Knauß created a test case showing how pdflatex output can be non-deterministic with some PNG files.
Packages fixed
The following 65 packages became reproducible due to changes in their build dependencies: alacarte, arbtt, bullet, ccfits, commons-daemon, crack-attack, d-conf, ejabberd-contrib, erlang-bear, erlang-cherly, erlang-cowlib, erlang-folsom, erlang-goldrush, erlang-ibrowse, erlang-jiffy, erlang-lager, erlang-lhttpc, erlang-meck, erlang-p1-cache-tab, erlang-p1-iconv, erlang-p1-logger, erlang-p1-mysql, erlang-p1-pam, erlang-p1-pgsql, erlang-p1-sip, erlang-p1-stringprep, erlang-p1-stun, erlang-p1-tls, erlang-p1-utils, erlang-p1-xml, erlang-p1-yaml, erlang-p1-zlib, erlang-ranch, erlang-redis-client, erlang-uuid, freecontact, givaro, glade, gnome-shell, gupnp, gvfs, htseq, jags, jana, knot, libconfig, libkolab, libmatio, libvsqlitepp, mpmath, octave-zenity, openigtlink, paman, pisa, pynifti, qof, ruby-blankslate, ruby-xml-simple, timingframework, trace-cmd, tsung, wings3d, xdg-user-dirs, xz-utils, zpspell.
The following packages became reproducible after getting fixed:
Uploads that might have fixed reproducibility issues:
Some uploads fixed some reproducibility issues but not all of them:
Patches submitted which have not made their way to the archive yet:
Stéphane Glondu reported two issues regarding embedded build dates in omake and cduce. Aurélien Jarno submitted a fix for the breakage of the make-dfsg test suite. As binutils now creates deterministic libraries by default, Aurélien's patch makes use of a wrapper to give the U flag to ar. Reiner Herrmann reported an issue with pound, which embeds random dhparams in its code during the build. Better solutions are yet to be found.
reproducible.debian.net
Package pages on reproducible.debian.net now have a new layout improving readability, designed by Mattia Rizzolo, h01ger, and Ulrike. The navigation is now on the left, as vertical space is more valuable nowadays. armhf is now enabled on all pages except the dashboard. Actual tests on armhf are expected to start shortly. (Mattia Rizzolo, h01ger) The limit on how many packages people can schedule using the reschedule script on Alioth has been bumped to 200. (h01ger) mod_rewrite is now used instead of JavaScript for the form in the dashboard. (h01ger) Following the rename of the software, debbindiff has mostly been replaced by either diffoscope or "differences" in generated HTML and IRC notification output. Connections to UDD have been made more robust. (Mattia Rizzolo)
diffoscope development
diffoscope version 31 was released on August 21st. This version improves fuzzy-matching by using the tlsh algorithm instead of ssdeep. New command line options are available: --max-diff-input-lines and --max-diff-block-lines to override the limits on diff input and output (Reiner Herrmann), and --debugger to dump the user into pdb in case of crashes (Mattia Rizzolo). jar archives should now be detected properly (Reiner Herrmann). Several general code cleanups were also done by Chris Lamb.
strip-nondeterminism development
Andrew Ayer released strip-nondeterminism version 0.010-1. Java properties files in jar archives should now be detected more accurately. A missing dependency spotted by Stéphane Glondu has been added.
Testing directory ordering issues: disorderfs
During the reproducible builds workshop at DebConf, participants identified that we were still short of a good way to test variations in filesystem behavior (e.g. file ordering or disk usage). Andrew Ayer took a couple of hours to create disorderfs. Based on FUSE, disorderfs is an overlay filesystem that mounts the content of a directory at another location. For this first version, it makes the order in which files appear in a directory random.
Documentation update
Dhole documented how to implement support for SOURCE_DATE_EPOCH in Python, bash, Makefiles, CMake, and C (a short C sketch of the pattern follows at the end of this report). Chris Lamb started to convert the wiki page describing SOURCE_DATE_EPOCH into a Freedesktop-like specification in the hope that it will convince more upstreams to adopt it.
Package reviews
44 reviews have been removed, 192 added and 77 updated this week. New issues identified this week: locale_dependent_order_in_devlibs_depends, randomness_in_ocaml_startup_files, randomness_in_ocaml_packed_libraries, randomness_in_ocaml_custom_executables, undeterministic_symlinking_by_rdfind, random_build_path_by_golang_compiler, and images_in_pdf_generated_by_latex. 117 new FTBFS bugs have been reported by Chris Lamb, Chris West (Faux), and Niko Tyni.
Misc.
Some reproducibility issues might surface very late. Chris Lamb noticed that the test suite for python-pykmip was now failing because its test certificates have expired. Let's hope no packages are hiding a certificate valid for 10 years somewhere in their source!
Pictures courtesy and copyright of Debian's own paparazzi: Aigars Mahinovs.
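For the C case, the documented pattern boils down to preferring SOURCE_DATE_EPOCH over the current time whenever it is set. A minimal sketch of that idea; the helper name build_timestamp() is ours, not taken from Dhole's documentation:
#include <errno.h>
#include <stdlib.h>
#include <time.h>

/* Return the timestamp to embed in build output: SOURCE_DATE_EPOCH
 * when set to a valid epoch value, the current time otherwise. */
static time_t
build_timestamp(void)
{
    const char *env = getenv("SOURCE_DATE_EPOCH");
    if (env && *env) {
        char *end;
        errno = 0;
        long long epoch = strtoll(env, &end, 10);
        if (errno == 0 && *end == '\0' && epoch >= 0)
            return (time_t)epoch;
    }
    return time(NULL);
}
Any tool embedding such a timestamp instead of time(NULL) produces identical output across rebuilds of the same source.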

20 June 2015

Lunar: Reproducible builds: week 5 in Stretch cycle

What happened in the reproducible builds effort this week:
Toolchain fixes
Uploads that should help other packages: Patch submitted for toolchain issues: Some discussions have been started in Debian and with upstream:
Packages fixed
The following 8 packages became reproducible due to changes in their build dependencies: access-modifier-checker, apache-log4j2, jenkins-xstream, libsdl-perl, maven-shared-incremental, ruby-pygments.rb, ruby-wikicloth, uimaj.
The following packages became reproducible after getting fixed:
Some uploads fixed some reproducibility issues but not all of them:
Patches submitted which did not make their way to the archive yet:
Discussions that have been started:
reproducible.debian.net
Holger Levsen added two new package sets: pkg-javascript-devel and pkg-php-pear. The lists of packages with and without notes are now sorted by the age of the latest build. Mattia Rizzolo added support for email notifications so that maintainers can be warned when a package becomes unreproducible. Please ask Mattia or Holger, or ask in the #debian-reproducible IRC channel, if you want to be notified for your packages!
strip-nondeterminism development
Andrew Ayer fixed the gzip handler so that it skips adding a predetermined timestamp when there was none.
Documentation update
Lunar added documentation about mtimes of files extracted using unzip being timezone dependent. He also wrote a short example on how to test reproducibility. Stephen Kitt updated the documentation about timestamps in PE binaries. Documentation and scripts to perform weekly reports were published by Lunar.
Package reviews
50 obsolete reviews have been removed, 51 added and 29 updated this week. Thanks to Chris West and Mathieu Bridon, amongst others. Newly identified issues:
Misc.
Lunar will be talking (in French) about reproducible builds at Pas Sage en Seine on June 19th, at 15:00 in Paris. The next meeting will happen this Wednesday, 19:00 UTC.

9 June 2015

Daniel Silverstone: Sometimes recruiters really miss the point...

I get quite a bit of recruitment spam, especially via my LinkedIn profile, but today's Twitter-madness (recruiter scraped my twitter and then contacted me) really took the biscuit. I include my response (stripped of identifying marks) for your amusement:
On Tue, Jun 09, 2015 at 10:30:35 +0000, Silly Recruiter wrote:
> I have come across your profile on various social media platforms today and
> after looking through them I feel you are a good fit for a permanent Java
> Developer Role I have available.
Given that you followed me on Twitter I'm assuming you found a tweet or two in
which I mention how much I hate Java?
> I can see you are currently working at Codethink and was wondering if you
> were considering a change of role?
I am not.
> The role on offer is working as a Java Developer for a company based in
> Manchester. You will be maintaining and enhancing the company's core websites
> whilst using the technologies Java, JavaScript, JSP, Struts, Hibernate XML
> and Grails.
This sounds like one of my worst nightmares.
> Are you interested in hearing more about the role? Please feel free to call
> or email me to discuss it further.
Thanks, but no.
> If not, do you know someone that is interested? We offer a £500 referral fee
> for any candidate that is successful.
I wouldn't inflict that kind of Lovecraftian nightmare of a software stack on
anyone I cared about, sorry.
D.
I then decided to take a look back over my Twitter and see if I could find what might have tripped this. There's some discussion of Minecraft modding but nothing which would suggest JavaScript, JSP, Struts, Hibernate XML or Grails. Indeed my most recent tweet regarding Java could hardly be construed as positive towards it. Sigh.

27 May 2015

Vincent Bernat: Live patching QEMU for VENOM mitigation

CVE-2015-3456, also known as VENOM, is a security vulnerability in the QEMU virtual floppy controller:
The Floppy Disk Controller (FDC) in QEMU, as used in Xen [...] and KVM, allows local guest users to cause a denial of service (out-of-bounds write and guest crash) or possibly execute arbitrary code via the FD_CMD_READ_ID, FD_CMD_DRIVE_SPECIFICATION_COMMAND, or other unspecified commands.
Even when QEMU has been configured with no floppy drive, the floppy controller code is still active. The vulnerability is easy to test1:
#include <stdlib.h>
#include <sys/io.h>

#define FDC_IOPORT 0x3f5
#define FD_CMD_READ_ID 0x0a

int main() {
    /* Gain access to the FDC I/O port (requires root), then flood the FIFO */
    ioperm(FDC_IOPORT, 1, 1);
    outb(FD_CMD_READ_ID, FDC_IOPORT);
    for (size_t i = 0;; i++)
        outb(0x42, FDC_IOPORT);
    return 0;
}
Once the fix is installed, all processes still have to be restarted for the upgrade to be effective. It is possible to minimize the downtime by leveraging virsh save. Another possibility would be to patch the running processes. The Linux kernel has attracted a lot of interest in this area, with solutions like Ksplice (mostly killed by Oracle), kGraft (by SUSE) and kpatch (by Red Hat), and the inclusion of a common framework in the kernel. Userspace has far fewer out-of-the-box solutions2. I present here a simple and self-contained way to patch a running QEMU to remove the vulnerability without requiring any noticeable downtime. Here is a short demonstration:

Proof of concept
First, let's find a workaround that would be simple to implement through live patching: while modifying the running code text is possible, it is easier to modify a single variable.

Concept
Looking at the code of the floppy controller and the patch, we can avoid the vulnerability by not accepting any command on the FIFO port. Each request would be answered by "Invalid command" (0x80) and a user won't be able to push more bytes to the FIFO until the answer is read and the FIFO queue is reset. Of course, the floppy controller would be rendered useless in this state. But who cares? The list of commands accepted by the controller on the FIFO port is contained in the handlers[] array:
static const struct {
    uint8_t value;
    uint8_t mask;
    const char* name;
    int parameters;
    void (*handler)(FDCtrl *fdctrl, int direction);
    int direction;
} handlers[] = {
    { FD_CMD_READ, 0x1f, "READ", 8, fdctrl_start_transfer, FD_DIR_READ },
    { FD_CMD_WRITE, 0x3f, "WRITE", 8, fdctrl_start_transfer, FD_DIR_WRITE },
    /* [...] */
    { 0, 0, "unknown", 0, fdctrl_unimplemented }, /* default handler */
};
To avoid browsing the array each time a command is received, another array is used to map each command to the appropriate handler:
/* Associate command to an index in the 'handlers' array */
static uint8_t command_to_handler[256];

static void fdctrl_realize_common(FDCtrl *fdctrl, Error **errp)
{
    int i, j;
    static int command_tables_inited = 0;

    /* Fill 'command_to_handler' lookup table */
    if (!command_tables_inited) {
        command_tables_inited = 1;
        for (i = ARRAY_SIZE(handlers) - 1; i >= 0; i--) {
            for (j = 0; j < sizeof(command_to_handler); j++) {
                if ((j & handlers[i].mask) == handlers[i].value) {
                    command_to_handler[j] = i;
                }
            }
        }
    }
    /* [...] */
}
Our workaround is to modify the command_to_handler[] array to map all commands to the fdctrl_unimplemented() handler (the last one in the handlers[] array).

Testing with gdb
To check that the workaround works as expected, we test it with gdb. Unless you have compiled QEMU yourself, you need to install a package with debug symbols. Unfortunately, on Debian, they are not available yet3. On Ubuntu, you can install the qemu-system-x86-dbgsym package after enabling the appropriate repositories. The following gdb function maps every command to the unimplemented handler:
define patch
  set $handler = sizeof(handlers)/sizeof(*handlers)-1
  set $i = 0
  while ($i < 256)
   set variable command_to_handler[$i++] = $handler
  end
  printf "Done!\n"
end
Attach to the vulnerable process (with attach), call the function (with patch) and detach from the process (with detach). You can check that the exploit no longer works. This could easily be automated.

Limitations Using gdb has two main limitations:
  1. It needs to be installed on each host to be patched.
  2. The debug packages need to be installed as well. Moreover, it can be difficult to fetch previous versions of those packages.

Writing a custom patcher
To overcome those limitations, we can write a custom patcher using the ptrace() system call, without relying on debug symbols being present.

Finding the right memory spot
Before being able to modify the command_to_handler[] array, we need to know its location. The first clue is given by the symbol table. To query it, use readelf -s:
$ readelf -s /usr/lib/debug/.build-id/09/95121eb46e2a4c13747ac2bad982829365c694.debug | \
>   sed -n -e 1,3p -e /command_to_handler/p
Symbol table '.symtab' contains 27066 entries:
   Num:    Value          Size Type    Bind   Vis      Ndx Name
  8485: 00000000009f9d00   256 OBJECT  LOCAL  DEFAULT   26 command_to_handler
This table is usually stripped out of the executable to save space, as shown below:
$ file -b /usr/bin/qemu-system-x86_64 | tr , \\n
ELF 64-bit LSB shared object
 x86-64
 version 1 (SYSV)
 dynamically linked
 interpreter /lib64/ld-linux-x86-64.so.2
 for GNU/Linux 2.6.32
 BuildID[sha1]=0995121eb46e2a4c13747ac2bad982829365c694
 stripped
If your distribution provides a debug package, the debug symbols are installed in /usr/lib/debug. Most modern distributions now rely on the build ID4 to map an executable to its debugging symbols, as in the example above; this mapping is purely mechanical, and a short sketch of it follows the list below. Without a debug package, you need to recompile the existing package without stripping debug symbols, in a clean environment5. On Debian, this can be done by setting the DEB_BUILD_OPTIONS environment variable to nostrip. We now have two possible cases:
  • the easy one, and
  • the hard one.
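As promised, here is a minimal sketch of the build-ID-to-path mapping: the first two hex digits of the build ID select a subdirectory, the rest names the file. The helper name debug_file_path() is ours, not part of any tool:
#include <stdio.h>
#include <string.h>

/* Build the conventional debug-file path for a build ID:
 * /usr/lib/debug/.build-id/XX/REST.debug, where XX is the first
 * two hex digits of the ID and REST is the remainder. */
static int
debug_file_path(const char *build_id, char *out, size_t outlen)
{
    if (strlen(build_id) < 3)
        return -1;
    int n = snprintf(out, outlen,
                     "/usr/lib/debug/.build-id/%.2s/%s.debug",
                     build_id, build_id + 2);
    return (n < 0 || (size_t)n >= outlen) ? -1 : 0;
}
Feeding it the build ID from the file output above yields exactly the path queried with readelf earlier.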

The easy case
On x86, here is the standard layout of a regular Linux process in memory6:
Memory layout of a regular process on x86
The random gaps (ASLR) are there to prevent an attacker from reliably jumping to a particular exploited function in memory. On x86-64, the layout is quite similar. The important point is that the base address of the executable is fixed. The memory mapping of a process is also available through /proc/PID/maps. Here is a shortened and annotated example on x86-64:
$ cat /proc/3609/maps
00400000-00401000         r-xp 00000000 fd:04 483  not-qemu [text segment]
00601000-00602000         r--p 00001000 fd:04 483  not-qemu [data segment]
00602000-00603000         rw-p 00002000 fd:04 483  not-qemu [BSS segment]
[random gap]
02419000-0293d000         rw-p 00000000 00:00 0    [heap]
[random gap]
7f0835543000-7f08356e2000 r-xp 00000000 fd:01 9319 /lib/x86_64-linux-gnu/libc-2.19.so
7f08356e2000-7f08358e2000 ---p 0019f000 fd:01 9319 /lib/x86_64-linux-gnu/libc-2.19.so
7f08358e2000-7f08358e6000 r--p 0019f000 fd:01 9319 /lib/x86_64-linux-gnu/libc-2.19.so
7f08358e6000-7f08358e8000 rw-p 001a3000 fd:01 9319 /lib/x86_64-linux-gnu/libc-2.19.so
7f08358e8000-7f08358ec000 rw-p 00000000 00:00 0
7f08358ec000-7f083590c000 r-xp 00000000 fd:01 5138 /lib/x86_64-linux-gnu/ld-2.19.so
7f0835aca000-7f0835acd000 rw-p 00000000 00:00 0
7f0835b08000-7f0835b0c000 rw-p 00000000 00:00 0
7f0835b0c000-7f0835b0d000 r--p 00020000 fd:01 5138 /lib/x86_64-linux-gnu/ld-2.19.so
7f0835b0d000-7f0835b0e000 rw-p 00021000 fd:01 5138 /lib/x86_64-linux-gnu/ld-2.19.so
7f0835b0e000-7f0835b0f000 rw-p 00000000 00:00 0
[random gap]
7ffdb0f85000-7ffdb0fa6000 rw-p 00000000 00:00 0    [stack]
With a regular executable, the value given in the symbol table is an absolute memory address:
$ readelf -s not-qemu | \
>   sed -n -e 1,3p -e /command_to_handler/p
Symbol table '.dynsym' contains 9 entries:
   Num:    Value          Size Type    Bind   Vis      Ndx Name
    47: 0000000000602080   256 OBJECT  LOCAL  DEFAULT   25 command_to_handler
So, the address of command_to_handler[], in the above example, is just 0x602080.

The hard case
To enhance security, it is possible to load some executables at a random base address, just like a library. Such an executable is called a Position Independent Executable (PIE). An attacker won't be able to rely on a fixed address to find some helpful function. Here is the new memory layout:
Memory layout of a PIE process on x86
With a PIE process, the value in the symbol table is now an offset from the base address.
$ readelf -s not-qemu-pie | sed -n -e 1,3p -e /command_to_handler/p
Symbol table '.dynsym' contains 17 entries:
   Num:    Value          Size Type    Bind   Vis      Ndx Name
    47: 0000000000202080   256 OBJECT  LOCAL  DEFAULT   25 command_to_handler
If we look at /proc/PID/maps, we can figure out where the array is located in memory:
$ cat /proc/12593/maps
7f6c13565000-7f6c13704000 r-xp 00000000 fd:01 9319  /lib/x86_64-linux-gnu/libc-2.19.so
7f6c13704000-7f6c13904000 ---p 0019f000 fd:01 9319  /lib/x86_64-linux-gnu/libc-2.19.so
7f6c13904000-7f6c13908000 r--p 0019f000 fd:01 9319  /lib/x86_64-linux-gnu/libc-2.19.so
7f6c13908000-7f6c1390a000 rw-p 001a3000 fd:01 9319  /lib/x86_64-linux-gnu/libc-2.19.so
7f6c1390a000-7f6c1390e000 rw-p 00000000 00:00 0
7f6c1390e000-7f6c1392e000 r-xp 00000000 fd:01 5138  /lib/x86_64-linux-gnu/ld-2.19.so
7f6c13b2e000-7f6c13b2f000 r--p 00020000 fd:01 5138  /lib/x86_64-linux-gnu/ld-2.19.so
7f6c13b2f000-7f6c13b30000 rw-p 00021000 fd:01 5138  /lib/x86_64-linux-gnu/ld-2.19.so
7f6c13b30000-7f6c13b31000 rw-p 00000000 00:00 0
7f6c13b31000-7f6c13b33000 r-xp 00000000 fd:04 4594  not-qemu-pie [text segment]
7f6c13cf0000-7f6c13cf3000 rw-p 00000000 00:00 0
7f6c13d2e000-7f6c13d32000 rw-p 00000000 00:00 0
7f6c13d32000-7f6c13d33000 r--p 00001000 fd:04 4594  not-qemu-pie [data segment]
7f6c13d33000-7f6c13d34000 rw-p 00002000 fd:04 4594  not-qemu-pie [BSS segment]
[random gap]
7f6c15c46000-7f6c15c67000 rw-p 00000000 00:00 0     [heap]
[random gap]
7ffe823b0000-7ffe823d1000 rw-p 00000000 00:00 0     [stack]
The base address is 0x7f6c13b31000, the offset is 0x202080, and therefore the location of the array is 0x7f6c13d33080. We can check with gdb:
$ print &command_to_handler
$1 = (uint8_t (*)[256]) 0x7f6c13d33080 <command_to_handler>

Patching a memory spot
Once we know the location of the command_to_handler[] array in memory, patching it is quite straightforward. First, we start tracing the target process:
/* Attach to the running process */
static int
patch_attach(pid_t pid)
{
    int status;
    siginfo_t si;

    printf("[.] Attaching to PID %d...\n", pid);
    if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1) {
        fprintf(stderr, "[!] Unable to attach to PID %d: %m\n", pid);
        return -1;
    }
    if (waitpid(pid, &status, 0) == -1) {
        fprintf(stderr, "[!] Error while attaching to PID %d: %m\n", pid);
        return -1;
    }
    assert(WIFSTOPPED(status)); /* Tracee may have died */
    if (ptrace(PTRACE_GETSIGINFO, pid, NULL, &si) == -1) {
        fprintf(stderr, "[!] Unable to read siginfo for PID %d: %m\n", pid);
        return -1;
    }
    assert(si.si_signo == SIGSTOP); /* Other signals may have been received */
    printf("[*] Successfully attached to PID %d\n", pid);
    return 0;
}
Then, we retrieve the command_to_handler[] array, modify it and put it back in memory7.
static int
patch_doit(pid_t pid, unsigned char *target)
{
    int ret = -1;
    unsigned char *command_to_handler = NULL;
    size_t i;

    /* Get the table */
    printf("[.] Retrieving command_to_handler table...\n");
    command_to_handler = ptrace_read(pid,
                                     target,
                                     QEMU_COMMAND_TO_HANDLER_SIZE);
    if (command_to_handler == NULL) {
        fprintf(stderr, "[!] Unable to read command_to_handler table: %m\n");
        goto out;
    }

    /* Check if the table has already been patched. */
    /* [...] */

    /* Patch it */
    printf("[.] Patching QEMU...\n");
    for (i = 0; i < QEMU_COMMAND_TO_HANDLER_SIZE; i++) {
        command_to_handler[i] = QEMU_NOT_IMPLEMENTED_HANDLER;
    }
    if (ptrace_write(pid, target, command_to_handler,
           QEMU_COMMAND_TO_HANDLER_SIZE) == -1) {
        fprintf(stderr, "[!] Unable to patch command_to_handler table: %m\n");
        goto out;
    }
    printf("[*] QEMU successfully patched!\n");
    ret = 0;
out:
    free(command_to_handler);
    return ret;
}
Since ptrace() only allows reading or writing one word at a time, ptrace_read() and ptrace_write() are wrappers to read or write arbitrarily large chunks of memory8. Here is the code for ptrace_read():
/* Read memory of the given process */
static void *
ptrace_read(pid_t pid, void *address, size_t size)
{
    /* Allocate the buffer (uword_t is a native word, e.g. unsigned long) */
    uword_t *buffer = malloc((size/sizeof(uword_t) + 1)*sizeof(uword_t));
    if (!buffer) return NULL;

    /* Read word by word */
    size_t readsz = 0;
    do {
        errno = 0;
        if ((buffer[readsz/sizeof(uword_t)] =
                ptrace(PTRACE_PEEKTEXT, pid,
                       (unsigned char*)address + readsz,
                       0)) && errno) {
            fprintf(stderr, "[!] Unable to peek one word at address %p: %m\n",
                    (unsigned char *)address + readsz);
            free(buffer);
            return NULL;
        }
        readsz += sizeof(uword_t);
    } while (readsz < size);
    return (unsigned char *)buffer;
}
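The matching ptrace_write() is not reproduced here. A minimal sketch of what it could look like, using PTRACE_POKETEXT word by word; this is a reconstruction, not the patcher's actual code, and it assumes size is a multiple of the word size (true for the 256-byte table):
/* Write memory of the given process, word by word. Assumes `size`
 * is a multiple of sizeof(uword_t); a trailing partial word would
 * otherwise overwrite bytes past the target buffer. */
static int
ptrace_write(pid_t pid, void *address, const void *data, size_t size)
{
    const uword_t *words = data;
    size_t written = 0;
    while (written < size) {
        if (ptrace(PTRACE_POKETEXT, pid,
                   (unsigned char *)address + written,
                   (void *)words[written/sizeof(uword_t)]) == -1) {
            fprintf(stderr, "[!] Unable to poke one word at address %p: %m\n",
                    (unsigned char *)address + written);
            return -1;
        }
        written += sizeof(uword_t);
    }
    return 0;
}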

Putting the pieces together
The patcher is provided with the following information:
  • the PID of the process to be patched,
  • the command_to_handler[] offset from the symbol table, and
  • the build ID of the executable file used to get this offset (as a safety measure).
The main steps are:
  1. Attach to the process with ptrace().
  2. Get the executable name from /proc/PID/exe.
  3. Parse /proc/PID/maps to find the address of the text segment (the first mapping backed by the executable; a parsing sketch follows this list).
  4. Do some sanity checks:
    • check there is an ELF header at this location (4-byte magic number),
    • check the executable type (ET_EXEC for regular executables, ET_DYN for PIE), and
    • get the build ID and compare with the expected one.
  5. From the base address and the provided offset, compute the location of the command_to_handler[] array.
  6. Patch it.
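Step 3 amounts to scanning /proc/PID/maps for the first mapping backed by the executable found in step 2. A minimal sketch of that step; the helper name and the error handling are ours and simplified compared to the real patcher:
#include <inttypes.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* Return the base address of the text segment of a process: the start
 * of the first /proc/PID/maps entry backed by the executable `exe`.
 * Returns 0 on error. Long lines are not handled, for brevity. */
static uintptr_t
text_base_address(pid_t pid, const char *exe)
{
    char path[64], line[512];
    uintptr_t start = 0;
    snprintf(path, sizeof(path), "/proc/%d/maps", (int)pid);
    FILE *maps = fopen(path, "r");
    if (!maps) return 0;
    while (fgets(line, sizeof(line), maps)) {
        if (strstr(line, exe) && sscanf(line, "%" SCNxPTR, &start) == 1)
            break;
        start = 0;
    }
    fclose(maps);
    return start;
}
For a regular (ET_EXEC) executable this returns the fixed base address; for a PIE it returns the randomized one, to which the offset from the symbol table is added.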
You can find the complete patcher on GitHub.
$ ./patch --build-id 0995121eb46e2a4c13747ac2bad982829365c694 \
>         --offset 9f9d00 \
>         --pid 16833
[.] Attaching to PID 16833...
[*] Successfully attached to PID 16833
[*] Executable name is /usr/bin/qemu-system-x86_64
[*] Base address is 0x7f7eea912000
[*] Both build IDs match
[.] Retrieving command_to_handler table...
[.] Patching QEMU...
[*] QEMU successfully patched!
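Not shown in the transcript: once the table has been written back, the tracer must detach so that the QEMU process resumes execution. A minimal sketch, mirroring the conventions of patch_attach() above (the function name is ours):
/* Detach from the tracee and let it resume execution */
static int
patch_detach(pid_t pid)
{
    if (ptrace(PTRACE_DETACH, pid, NULL, NULL) == -1) {
        fprintf(stderr, "[!] Unable to detach from PID %d: %m\n", pid);
        return -1;
    }
    printf("[*] Successfully detached from PID %d\n", pid);
    return 0;
}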

  1. The complete code for this test is on GitHub.
  2. An interesting project seems to be Katana. But there are also some insightful hacking papers on the subject.
  3. Some packages come with a -dbg package with debug symbols, some others don't. Fortunately, a proposal to automatically produce debugging symbols for everything is near completion.
  4. The Fedora Wiki contains the rationale behind the build ID.
  5. If the build is incorrectly reproduced, the build ID won't match. The information provided by the debug symbols may or may not be correct. Debian currently has a reproducible builds effort to ensure that each package can be reproduced.
  6. Anatomy of a program in memory is a great blog post explaining in more detail how a program lives in memory.
  7. Being an uninitialized static variable, the variable is in the BSS section. This section is mapped to a writable memory segment. Even if it were not, on Linux the ptrace() system call would still be allowed to write: Linux would copy the page and mark it as private.
  8. With Linux 3.2 or later, process_vm_readv() and process_vm_writev() can be used to transfer data from/to a remote process without using ptrace() at all. However, ptrace() would still be needed to reliably stop the main thread.
